Dataset schema (column name, dtype, and observed value range or string-length range):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0–832k |
| id | float64 | 2.49B–32.1B |
| type | string (classes) | 1 value |
| created_at | string (lengths) | 19–19 |
| repo | string (lengths) | 7–112 |
| repo_url | string (lengths) | 36–141 |
| action | string (classes) | 3 values |
| title | string (lengths) | 1–744 |
| labels | string (lengths) | 4–574 |
| body | string (lengths) | 9–211k |
| index | string (classes) | 10 values |
| text_combine | string (lengths) | 96–211k |
| label | string (classes) | 2 values |
| text | string (lengths) | 96–188k |
| binary_label | int64 | 0–1 |
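As a quick sanity check on the schema above, the sketch below builds a one-row frame using a few of these columns (values taken from the sample row that follows) and filters it the way a downstream consumer might. The filter criteria and the use of pandas are illustrative assumptions, not part of the dataset's tooling.

```python
import pandas as pd

# Illustrative only: a one-row frame with columns from the schema above.
df = pd.DataFrame({
    "type": ["IssuesEvent"],
    "created_at": ["2022-11-30 07:14:18"],
    "repo": ["lizhihao6/get-daily-arxiv-noti"],
    "action": ["opened"],
    "binary_label": [1],
})

# Keep newly opened issues that carry a positive binary label.
opened = df[(df["action"] == "opened") & (df["binary_label"] == 1)]
```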
Sample row:

- **Unnamed: 0:** 19,413
- **id:** 25,556,499,744
- **type:** IssuesEvent
- **created_at:** 2022-11-30 07:14:18
- **repo:** lizhihao6/get-daily-arxiv-noti
- **repo_url:** https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
- **action:** opened
- **title:** New submissions for Wed, 30 Nov 22
- **labels:** event camera white balance compression image signal processing image signal process raw raw image events camera color contrast AWBISP events
- **body:** (the Markdown digest that follows)
## Keyword: events

### Post-training Quantization on Diffusion Models
- **Authors:** Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, Yan Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15736
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15736
- **Abstract** Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. This prevents diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of the diffusion model (DM) by finding shorter yet effective sampling trajectories. However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM acceleration. However, the output distributions of noise estimation networks change with the time-step, making previous PTQ methods fail in DMs since they are designed for single-time-step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DMs in three aspects: quantized operations, calibration dataset, and calibration metric. We summarize and use several observations derived from all-inclusive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. Experimentally, our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner. Importantly, our method can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM.

### Beyond Ensemble Averages: Leveraging Climate Model Ensembles for Subseasonal Forecasting
- **Authors:** Elena Orlova, Haokun Liu, Raphael Rossellini, Benjamin Cash, Rebecca Willett
- **Subjects:** Machine Learning (cs.LG); Atmospheric and Oceanic Physics (physics.ao-ph)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15856
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15856
- **Abstract** Producing high-quality forecasts of key climate variables such as temperature and precipitation on subseasonal time scales has long been a gap in operational forecasting. Recent studies have shown promising results using machine learning (ML) models to advance subseasonal forecasting (SSF), but several open questions remain. First, several past approaches use the average of an ensemble of physics-based forecasts as an input feature of these models. However, ensemble forecasts contain information that can aid prediction beyond only the ensemble mean. Second, past methods have focused on average performance, whereas forecasts of extreme events are far more important for planning and mitigation purposes. Third, climate forecasts correspond to a spatially-varying collection of forecasts, and different methods account for spatial variability in the response differently. Trade-offs between different approaches may be mitigated with model stacking. This paper describes the application of a variety of ML methods used to predict monthly average precipitation and two-meter temperature using physics-based predictions (ensemble forecasts) and observational data such as relative humidity, pressure at sea level, or geopotential height, two weeks in advance for the whole continental United States. Regression, quantile regression, and tercile classification tasks using linear models, random forests, convolutional neural networks, and stacked models are considered. The proposed models outperform common baselines such as historical averages (or quantiles) and ensemble averages (or quantiles). This paper further includes an investigation of feature importance, trade-offs between using the full ensemble or only the ensemble average, and different modes of accounting for spatial variability.

### Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework
- **Authors:** Amin Shojaeighadikolaei, Arman Ghasemi, Kailani Jones, Yousif Dafalla, Alexandru G. Bardas, Reza Ahmadi, Morteza Haashemi
- **Subjects:** Multiagent Systems (cs.MA); Machine Learning (cs.LG); Systems and Control (eess.SY)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15858
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15858
- **Abstract** This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems. In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users. DR has a widely recognized potential for improving power grid stability and reliability, while at the same time reducing end-users' energy bills. However, conventional DR techniques come with several shortcomings, such as the inability to handle operational uncertainties while incurring end-user disutility, which prevents widespread adoption in real-world applications. The proposed framework addresses these shortcomings by implementing DR and DEM based on a real-time pricing strategy that is achieved using deep reinforcement learning. Furthermore, this framework enables the power grid service provider to leverage distributed energy resources (i.e., PV rooftop panels and battery storage) as dispatchable assets to support the smart grid during peak hours, thus achieving management of distributed energy resources. Simulation results based on the Deep Q-Network (DQN) demonstrate significant improvements of the 24-hour cumulative profit for both prosumers and the power grid service provider, as well as major reductions in the utilization of the power grid reserve generators.

### An Extreme-Adaptive Time Series Prediction Model Based on Probability-Enhanced LSTM Neural Networks
- **Authors:** Yanhong Li, Jack Xu, David C. Anastasiu
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15891
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15891
- **Abstract** Forecasting time series with extreme events has been a challenging and prevalent research topic, especially when the time series data are affected by complicated uncertain factors, such as is the case in hydrologic prediction. Diverse traditional and deep learning models have been applied to discover the nonlinear relationships and recognize the complex patterns in these types of data. However, existing methods usually ignore the negative influence of imbalanced data, or severe events, on model training. Moreover, methods are usually evaluated on a small number of generally well-behaved time series, which does not show their ability to generalize. To tackle these issues, we propose a novel probability-enhanced neural network model, called NEC+, which concurrently learns extreme and normal prediction functions and a way to choose among them via selective backpropagation. We evaluate the proposed model on the difficult 3-day-ahead hourly water level prediction task applied to 9 reservoirs in California. Experimental results demonstrate that the proposed model significantly outperforms state-of-the-art baselines and exhibits superior generalization ability on data with diverse distributions.

### Finlay, Thames, Dufay, and Paget color screen process collections: Using digital registration of viewing screens to reveal original color
- **Authors:** Geoffrey Barker, Jan Hubička, Mark Jacobs, Linda Kimrová, Kendra Meyer, Doug Peterson
- **Subjects:** Graphics (cs.GR); Multimedia (cs.MM)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16076
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16076
- **Abstract** We discuss digitization, subsequent digital analysis, and processing of negatives (and diapositives) made by Finlay, Thames, Dufay, Paget, and similar additive color screen processes. These early color processes (introduced in the 1890s and popular until the 1950s) used a special color screen filter and a monochromatic negative. Due to the poor stability of the dyes used to produce color screens, many of the photographs appear faded; others exist only in the form of (monochromatic) negatives. We discuss the possibility of digitally reconstructing the original color from scans of original negatives or by virtue of infrared imaging of original transparencies (which eliminates the physically coupled color filters) and digitally recreating the original color filter pattern using a new open-source software tool. Photographs taken using additive color screen processes are some of the very earliest color images of our shared cultural heritage. They depict people, places, and events for which there are no other surviving color images. We hope that our new software tool can bring these images back to life.
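The reconstruction idea behind additive screen processes (a monochrome negative viewed through a regular colour screen) can be sketched numerically: if the per-pixel filter colour is known, each scanned density contributes to exactly one channel, and the two missing channels are interpolated from neighbours. Everything below (the function name, the periodic mask, the naive 3x3 mean fill, wrap-around edges) is an illustrative assumption, not the paper's open-source tool.

```python
import numpy as np

def reconstruct_color(mono, screen):
    """mono: (H, W) scanned densities in [0, 1].
    screen: (H, W) channel indices 0/1/2, the R/G/B filter at each site."""
    h, w = mono.shape
    color = np.zeros((h, w, 3))
    # Route each scanned value into the channel selected by the screen.
    color[np.arange(h)[:, None], np.arange(w)[None, :], screen] = mono
    # Naive demosaicing: fill the two missing channels at every site with the
    # local mean of that channel over a 3x3 neighbourhood (edges wrap).
    for c in range(3):
        chan = color[..., c]
        mask = (screen == c).astype(float)
        total = sum(np.roll(np.roll(chan, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        count = sum(np.roll(np.roll(mask, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        color[..., c] = np.where(mask > 0, chan, total / np.maximum(count, 1))
    return color

# A period-3 diagonal screen guarantees all three colours in every 3x3 window.
mono = np.full((6, 6), 0.5)
screen = np.add.outer(np.arange(6), np.arange(6)) % 3
out = reconstruct_color(mono, screen)
```

With a uniform 0.5 input the recovered image is a uniform neutral grey, which is the expected behaviour for a flat scene.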
### G-CMP: Graph-enhanced Contextual Matrix Profile for unsupervised anomaly detection in sensor-based remote health monitoring
- **Authors:** Nivedita Bijlani, Oscar Mendez Maldonado, Samaneh Kouchaki
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16122
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16122
- **Abstract** Sensor-based remote health monitoring is used in industrial, urban and healthcare settings to monitor ongoing operation of equipment and human health. An important aim is to intervene early if anomalous events or adverse health is detected. In the wild, these anomaly detection approaches are challenged by noise, label scarcity, high dimensionality, explainability and wide variability in operating environments. The Contextual Matrix Profile (CMP) is a configurable 2-dimensional version of the Matrix Profile (MP) that uses the distance matrix of all subsequences of a time series to discover patterns and anomalies. The CMP has been shown to enhance the effectiveness of the MP and other SOTA methods at detecting, visualising and interpreting true anomalies in noisy real-world data from different domains. It excels at zooming out and identifying temporal patterns at configurable time scales. However, the CMP does not address cross-sensor information, and cannot scale to high-dimensional data. We propose a novel, self-supervised graph-based approach for temporal anomaly detection that works on context graphs generated from the CMP distance matrix. The learned graph embeddings encode the anomalous nature of a time context. In addition, we evaluate other graph outlier algorithms for the same task. Since our pipeline is modular, graph construction, generation of graph embeddings, and pattern recognition logic can all be chosen based on the specific pattern detection application. We verified the effectiveness of graph-based anomaly detection and compared it with the CMP and three state-of-the-art methods on two real-world healthcare datasets with different anomalies. Our proposed method demonstrated better recall, alert rate and generalisability.

### Physics Informed Neural Network for Dynamic Stress Prediction
- **Authors:** Hamed Bolandi, Gautam Sreekumar, Xuyang Li, Nizar Lajnef, Vishnu Naresh Boddeti
- **Subjects:** Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16190
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16190
- **Abstract** Structural failures are often caused by catastrophic events such as earthquakes and winds. As a result, it is crucial to predict dynamic stress distributions during highly disruptive events in real time. Currently available high-fidelity methods, such as Finite Element Models (FEMs), suffer from their inherent high complexity. Therefore, to reduce computational cost while maintaining accuracy, a Physics Informed Neural Network (PINN), the PINN-Stress model, is proposed to predict the entire sequence of stress distribution based on Finite Element simulations using a partial differential equation (PDE) solver. Using automatic differentiation, we embed a PDE into a deep neural network's loss function to incorporate information from measurements and PDEs. The PINN-Stress model can predict the sequence of stress distribution in almost real time and can generalize better than the model without PINN.

### Reasoning about Promises in Weak Memory Models with Event Structures (Extended Version)
- **Authors:** Heike Wehrheim, Lara Bargmann, Brijesh Dongol
- **Subjects:** Logic in Computer Science (cs.LO); Programming Languages (cs.PL)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16330
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16330
- **Abstract** Modern processors such as ARMv8 and RISC-V allow executions in which independent instructions within a process may be reordered. To cope with such phenomena, so-called promising semantics have been developed, which permit threads to read values that have not yet been written. Each promise is a speculative update that is later validated (fulfilled) by an actual write. Promising semantics are operational, providing a pathway for developing proof calculi. In this paper, we develop an incorrectness-style logic, resulting in a framework for reasoning about state reachability. Like incorrectness logic, our assertions are underapproximating, since the set of all valid promises is not known at the start of execution. Our logic uses event structures as assertions to compactly represent the ordering among events such as promised and fulfilled writes. We prove soundness and completeness of our proof calculus and demonstrate its applicability by proving reachability properties of standard weak memory litmus tests.

## Keyword: event camera

There is no result

## Keyword: events camera

There is no result

## Keyword: white balance

There is no result

## Keyword: color contrast

There is no result

## Keyword: AWBISP

There is no result

## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

### Post-training Quantization on Diffusion Models
- **Authors:** Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, Yan Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15736
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15736
- **Abstract** Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. This prevents diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of the diffusion model (DM) by finding shorter yet effective sampling trajectories. However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM acceleration. However, the output distributions of noise estimation networks change with the time-step, making previous PTQ methods fail in DMs since they are designed for single-time-step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DMs in three aspects: quantized operations, calibration dataset, and calibration metric. We summarize and use several observations derived from all-inclusive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. Experimentally, our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner. Importantly, our method can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM.
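The time-step sensitivity described above is easy to see in a toy sketch: a single absmax scale calibrated at one time-step badly clips activations from another whose distribution is wider. The names and the symmetric absmax scheme below are generic post-training-quantization conventions, not the paper's actual method.

```python
import numpy as np

def calibrate_scales(activations_by_t):
    """activations_by_t: dict {time_step: calibration activations}.
    Returns one symmetric absmax scale per time-step."""
    return {t: np.abs(a).max() / 127.0 for t, a in activations_by_t.items()}

def quantize(x, scale):
    # Symmetric signed 8-bit quantization.
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Noise-estimation outputs that widen with the diffusion time-step t:
acts = {0: np.linspace(-1, 1, 256), 999: np.linspace(-8, 8, 256)}
scales = calibrate_scales(acts)

# Per-time-step scale vs. a scale calibrated only at t = 0.
x = acts[999]
err_per_t = np.abs(dequantize(quantize(x, scales[999]), scales[999]) - x).max()
err_shared = np.abs(dequantize(quantize(x, scales[0]), scales[0]) - x).max()
```

The per-time-step scale keeps the error at the rounding level, while the shared scale saturates everything beyond ±1, which is the failure mode the abstract attributes to single-time-step PTQ.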
### Compressing Cross-Lingual Multi-Task Models at Qualtrics
- **Authors:** Daniel Campos, Daniel Perry, Samir Joshi, Yashmeet Gambhir, Wei Du, Zhengzheng Xing, Aaron Colak
- **Subjects:** Computation and Language (cs.CL); Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15927
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15927
- **Abstract** Experience management is an emerging business area where organizations focus on understanding the feedback of customers and employees in order to improve their end-to-end experiences. This results in a unique set of machine learning problems to help understand how people feel, discover issues they care about, and find which actions need to be taken on data that are different in content and distribution from traditional NLP domains. In this paper, we present a case study of building text analysis applications that perform multiple classification tasks efficiently in 12 languages in the nascent business area of experience management. In order to scale up modern ML methods on experience data, we leverage cross-lingual and multi-task modeling techniques to consolidate our models into a single deployment to avoid overhead. We also make use of model compression and model distillation to reduce overall inference latency and hardware cost to a level acceptable for business needs while maintaining model prediction quality. Our findings show that multi-task modeling improves task performance for a subset of experience management tasks in both XLM-R and mBert architectures. Among the compressed architectures we explored, we found that MiniLM achieved the best compression/performance tradeoff. Our case study demonstrates a speedup of up to 15.61x with 2.60% average task degradation (or 3.29x speedup with 1.71% degradation) and estimated savings of 44% over using the original full-size model. These results demonstrate a successful scaling up of text classification for the challenging new area of ML for experience management.

### Maximal Atomic irRedundant Sets: a Usage-based Dataflow Partitioning Algorithm
- **Authors:** Corentin Ferry, Steven Derrien, Sanjay Rajopadhye
- **Subjects:** Programming Languages (cs.PL); Distributed, Parallel, and Cluster Computing (cs.DC)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15933
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15933
- **Abstract** Programs admitting a polyhedral representation can be transformed in many ways for locality and parallelism, notably loop tiling. Data flow analysis can then compute dependence relations between iterations and between tiles. When tiling is applied, certain iteration-wise dependences cross tile boundaries, creating the need for inter-tile data communication. Previous work computes it as the flow-in and flow-out sets of iteration tiles. In this paper, we propose a partitioning of the flow-out of a tile into the maximal sets of iterations that are entirely consumed and incur no redundant storage or transfer. The computation is described as an algorithm and performed on a selection of polyhedral programs. We then suggest possible applications of this decomposition in compression and memory allocation.

### Trustless unknown-order groups
- **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE)
- **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128
- **Abstract** Groups of unknown order are of major interest due to their applications including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order N. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits. Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits. We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting. We conclude that, in practice, Jacobians of hyperelliptic curves are more efficient than ideal class groups at the same security level -- both in the group operation and in the size of the element representation.

### DBA: Efficient Transformer with Dynamic Bilinear Low-Rank Attention
- **Authors:** Bosheng Qin, Juncheng Li, Siliang Tang, Yueting Zhuang
- **Subjects:** Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16368
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16368
- **Abstract** Many studies have been conducted to improve the efficiency of the Transformer from quadratic to linear. Among them, the low-rank-based methods aim to learn projection matrices to compress the sequence length. However, the projection matrices are fixed once learned, compressing the sequence length with the same coefficients for tokens in the same position. Adopting such input-invariant projections ignores the fact that the most informative part of a sequence varies from sequence to sequence, thus failing to preserve the most useful information that lies in varied positions. In addition, previous efficient Transformers only focus on the influence of sequence length while neglecting the effect of hidden state dimension. To address the aforementioned problems, we present an efficient yet effective attention mechanism, namely the Dynamic Bilinear Low-Rank Attention (DBA), which compresses the sequence length by input-sensitive dynamic projection matrices and achieves linear time and space complexity by jointly optimizing the sequence length and hidden state dimension while maintaining state-of-the-art performance. Specifically, we first theoretically demonstrate that the sequence length can be compressed non-destructively from a novel perspective of information theory, with compression matrices dynamically determined by the input sequence. Furthermore, we show that the hidden state dimension can be approximated by extending the Johnson-Lindenstrauss lemma, optimizing the attention in bilinear form. Theoretical analysis shows that DBA is proficient in capturing high-order relations in cross-attention problems. Experiments over tasks with diverse sequence length conditions show that DBA achieves state-of-the-art performance compared with various strong baselines while maintaining less memory consumption with higher speed.
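The core trick behind low-rank attention schemes like the one above is to compress the keys and values from n rows to k << n rows before the attention map is formed, so no n x n matrix is ever materialised. The sketch below illustrates that structure only; the stand-in projection is a softmaxed random matrix applied once, whereas DBA's projections are input-dependent and jointly optimise the hidden dimension as well.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lowrank_attention(Q, K, V, k, rng):
    """Attention with K, V compressed to k rows; cost is O(n*k), not O(n^2).
    The projection P here is an illustrative stand-in for a dynamic one."""
    n, d = K.shape
    P = softmax(rng.standard_normal((k, n)), axis=-1)  # (k, n), rows sum to 1
    K_c, V_c = P @ K, P @ V                            # compressed to (k, d)
    scores = softmax(Q @ K_c.T / np.sqrt(d), axis=-1)  # (n, k) attention map
    return scores @ V_c                                # (n, d) output

rng = np.random.default_rng(0)
Q = rng.standard_normal((16, 4))
K = rng.standard_normal((16, 4))
V = rng.standard_normal((16, 4))
out = lowrank_attention(Q, K, V, k=3, rng=rng)
```

Because both the projection rows and the attention rows are convex weights, every output entry stays within the range of the original values, which is a quick way to check the compression step has not distorted the value space.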
### Compressing Volumetric Radiance Fields to 1 MB
- **Authors:** Lingzhi Li, Zhen Shen, Zhongshu Wang, Li Shen, Liefeng Bo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16386
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16386
- **Abstract** Approximating radiance fields with volumetric grids is one of the promising directions for improving NeRF, represented by methods like Plenoxels and DVGO, which achieve super-fast training convergence and real-time rendering. However, these methods typically require a tremendous storage overhead, costing up to hundreds of megabytes of disk space and runtime memory for a single scene. We address this issue in this paper by introducing a simple yet effective framework, called vector quantized radiance fields (VQRF), for compressing these volume-grid-based radiance fields. We first present a robust and adaptive metric for estimating redundancy in grid models and performing voxel pruning by better exploring intermediate outputs of volumetric rendering. A trainable vector quantization is further proposed to improve the compactness of grid models. In combination with an efficient joint tuning strategy and post-processing, our method can achieve a compression ratio of 100$\times$ by reducing the overall model size to 1 MB with negligible loss in visual quality. Extensive experiments demonstrate that the proposed framework is capable of achieving unrivaled performance and good generalization across multiple methods with distinct volumetric structures, facilitating the wide use of volumetric radiance field methods in real-world applications. Code available at https://github.com/AlgoHunt/VQRF

## Keyword: RAW

### Learning Visual Planning Models from Partially Observed Images
- **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666
- **Abstract** There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We exhibit empirically that our approach is more effective than a state-of-the-art approach of learning visual planning models in environments with incomplete observations.

### Deep Semi-supervised Learning with Double-Contrast of Features and Semantics
- **Authors:** Quan Feng, Jiayu Yao, Zhison Pan, Guojun Zhou
- **Subjects:** Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15671
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15671
- **Abstract** In recent years, the field of intelligent transportation systems (ITS) has achieved remarkable success, which is mainly due to the large amount of available annotation data. However, obtaining such annotated data is expensive in practice. Therefore, a more realistic strategy is to leverage semi-supervised learning (SSL) with a small amount of labeled data and a large amount of unlabeled data. Typically, semantic consistency regularization and the two-stage learning methods of decoupling feature extraction and classification have been proven effective. Nevertheless, representation learning limited only to semantic consistency regularization may not guarantee the separation or discriminability of representations of samples with different semantics; due to the inherent limitations of the two-stage learning methods, the extracted features may not match the specific downstream tasks. In order to deal with the above drawbacks, this paper proposes an end-to-end deep semi-supervised learning method with a double contrast of semantics and features, which extracts effective task-specific discriminative features by contrasting the semantics/features of positive and negative augmented sample pairs. Moreover, we leverage information theory to explain the rationality of the double contrast of semantics and features and relax mutual information to a contrastive loss in a simpler way. Finally, the effectiveness of our method is verified on benchmark datasets.
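Contrastive objectives of the kind the paper builds on are usually instantiated as an InfoNCE-style loss: each anchor embedding should score high against its positive pair and low against every other sample in the batch. The sketch below is the standard generic form, not the authors' exact double-contrast objective.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """anchors, positives: (N, D) L2-normalised embeddings. Row i of each is
    a positive pair; every other row acts as a negative."""
    logits = anchors @ positives.T / temperature           # (N, N) similarities
    # Cross-entropy against the diagonal (matching-pair) targets.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(anchors))
    return -log_prob[idx, idx].mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))
X /= np.linalg.norm(X, axis=1, keepdims=True)

loss_match = info_nce(X, X)                    # perfectly aligned pairs
loss_mismatch = info_nce(X, np.roll(X, 1, 0))  # pairs deliberately shuffled
```

Aligned pairs should give a much lower loss than shuffled ones, which is the gradient signal that pulls positives together and pushes negatives apart.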
### Superpoint Transformer for 3D Scene Instance Segmentation
- **Authors:** Jiahao Sun, Chunmei Qing, Junpeng Tan, Xiangmin Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15766
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15766
- **Abstract** Most existing methods realize 3D instance segmentation by extending models used for 3D object detection or 3D semantic segmentation. However, these non-straightforward methods suffer from two drawbacks: 1) imprecise bounding boxes or unsatisfactory semantic predictions limit the performance of the overall 3D instance segmentation framework; 2) existing methods require a time-consuming intermediate step of aggregation. To address these issues, this paper proposes a novel end-to-end 3D instance segmentation method based on the Superpoint Transformer, named SPFormer. It groups potential features from point clouds into superpoints, and directly predicts instances through query vectors without relying on the results of object detection or semantic segmentation. The key step in this framework is a novel query decoder with transformers that can capture the instance information through the superpoint cross-attention mechanism and generate the superpoint masks of the instances. Through bipartite matching based on superpoint masks, SPFormer can implement the network training without the intermediate aggregation step, which accelerates the network. Extensive experiments on the ScanNetv2 and S3DIS benchmarks verify that our method is concise yet efficient. Notably, SPFormer exceeds state-of-the-art methods by 4.3% on the ScanNetv2 hidden test set in terms of mAP while maintaining fast inference (247 ms per frame). Code is available at https://github.com/sunjiahao1999/SPFormer.
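The bipartite matching used for training in mask-based methods like the one above pairs each predicted mask with at most one ground-truth instance so the loss is computed on matched pairs only. The sketch below uses a Dice-based cost and an exact brute-force search over assignments (practical implementations use the Hungarian algorithm instead); the cost choice and function names are illustrative assumptions.

```python
from itertools import permutations
import numpy as np

def match_instances(pred_masks, gt_masks, eps=1e-6):
    """pred_masks: (P, N) soft masks in [0, 1]; gt_masks: (G, N) binary masks,
    with P >= G. Returns best[g] = index of the prediction assigned to gt g,
    minimising total (1 - Dice) cost. Brute force; fine for tiny P and G."""
    inter = pred_masks @ gt_masks.T                              # (P, G)
    sums = pred_masks.sum(1)[:, None] + gt_masks.sum(1)[None, :]
    cost = 1.0 - (2 * inter + eps) / (sums + eps)                # 1 - Dice
    best = min(permutations(range(pred_masks.shape[0]), gt_masks.shape[0]),
               key=lambda perm: sum(cost[p, g] for g, p in enumerate(perm)))
    return list(best)

# Three disjoint ground-truth masks; predictions are the same masks permuted.
gt = np.array([[1, 1, 0, 0, 0, 0],
               [0, 0, 1, 1, 0, 0],
               [0, 0, 0, 0, 1, 1]], dtype=float)
pred = gt[[2, 0, 1]]
assignment = match_instances(pred, gt)
```

Here the matcher recovers the permutation exactly: prediction 1 is gt 0, prediction 2 is gt 1, and prediction 0 is gt 2.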
### ClueWeb22: 10 Billion Web Documents with Rich Information
- **Authors:** Arnold Overwijk, Chenyan Xiong, Xiao Liu, Cameron VandenBerg, Jamie Callan
- **Subjects:** Information Retrieval (cs.IR); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15848
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15848
- **Abstract** ClueWeb22, the newest iteration of the ClueWeb line of datasets, provides 10 billion web pages accompanied by rich information. Its design was influenced by the need for a high-quality, large-scale web corpus to support a range of academic and industry research, for example, in information systems, retrieval-augmented AI systems, and model pretraining. Compared with earlier ClueWeb corpora, the ClueWeb22 corpus is larger, more varied, of higher quality, and aligned with the document distributions in commercial web search. Besides raw HTML, ClueWeb22 includes rich information about the web pages provided by industry-standard document understanding systems, including the visual representation of pages rendered by a web browser, parsed HTML structure information from a neural network parser, and pre-processed cleaned document text to lower the barrier to entry. Many of these signals have been widely used in industry but are available to the research community for the first time at this scale.
### Neural Feature-Adaptation for Symbolic Predictions Using Pre-Training and Semantic Loss - **Authors:** Vedant Shah, Aditya Agrawal, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Tanmay Verlekar - **Subjects:** Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Logic in Computer Science (cs.LO) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16047 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16047 - **Abstract** We are interested in neurosymbolic systems consisting of a high-level symbolic layer for explainable prediction in terms of human-intelligible concepts; and a low-level neural layer for extracting symbols required to generate the symbolic explanation. Real data is often imperfect meaning that even if the symbolic theory remains unchanged, we may still need to address the problem of mapping raw data to high-level symbols, each time there is a change in the data acquisition environment or equipment. Manual (re-)annotation of the raw data each time this happens is laborious and expensive; and automated labelling methods are often imperfect, especially for complex problems. NEUROLOG proposed the use of a semantic loss function that allows an existing feature-based symbolic model to guide the extraction of feature-values from raw data, using `abduction'. However, the experiments demonstrating the use of semantic loss through abduction appear to rely heavily on a domain-specific pre-processing step that enables a prior delineation of feature locations in the raw data. We examine the use of semantic loss in domains where such pre-processing is not possible, or is not obvious. We show that without any prior information about the features, the NEUROLOG approach can continue to predict accurately even with substantially incorrect feature predictions. We show also that prior information about the features in the form of even imperfect pre-training can help correct this situation. 
These findings are replicated on the original problem considered by NEUROLOG, without the use of feature-delineation. This suggests that symbolic explanations constructed for data in a domain could be re-used in a related domain, by `feature-adaptation' of pre-trained neural extractors using the semantic loss function constrained by abductive feedback. ### Behavior Estimation from Multi-Source Data for Offline Reinforcement Learning - **Authors:** Guoxi Zhang, Hisashi Kashima - **Subjects:** Machine Learning (cs.LG); Robotics (cs.RO) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16078 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16078 - **Abstract** Offline reinforcement learning (RL) has received rising interest due to its appealing data efficiency. The present study addresses behavior estimation, a task that lays the foundation of many offline RL algorithms. Behavior estimation aims at estimating the policy with which training data are generated. In particular, this work considers a scenario where the data are collected from multiple sources. In this case, neglecting data heterogeneity, existing approaches for behavior estimation suffer from behavior misspecification. To overcome this drawback, the present study proposes a latent variable model to infer a set of policies from data, which allows an agent to use as behavior policy the policy that best describes a particular trajectory. This model provides an agent with a fine-grained characterization for multi-source data and helps it overcome behavior misspecification. This work also proposes a learning algorithm for this model and illustrates its practical usage via extending an existing offline RL algorithm. Lastly, with extensive evaluation this work confirms the existence of behavior misspecification and the efficacy of the proposed model. ### Peculiarities of gender disambiguation and ordering of non-English authors' names for Economic papers beyond core databases - **Authors:** O. Mryglod, S. 
Nazarovets, S. Kozmenko - **Subjects:** Digital Libraries (cs.DL) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16124 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16124 - **Abstract** This paper presents the results of further exploration of Crossref data related to Ukrainian Economics research (the first part can be found in [Mryglod, O., Nazarovets, S. & Kozmenko, S. (2021) Scientometrics, 126, 8187]). Our purpose is to supplement the quantitative portrait of the Ukrainian Economics discipline with the results of gender and author ordering analysis at the level of individual authors; special methods of working with bibliographic data with a predominant share of non-English authors are used. The properties of gender mixing, the likelihood of male and female authors occupying the first position in the authorship list, as well as the arrangements of names are studied. A data set containing bibliographic records related to Ukrainian journal publications in the field of Economics is constructed using Crossref metadata. The described stages for working with such specific data help to work at the level of authors and analyse, in particular, gender issues. Despite the larger number of female authors, gender equality is more likely to be reported at the individual level for the discipline of Ukrainian Economics. The tendencies towards collaborative or solo-publications and gender mixing patterns are found to be dependent on the journal: the differences for publications indexed in Scopus and/or Web of Science databases are found. It has also been found that Ukrainian Economics research is characterized by a rather non-alphabetical order of authors. To our knowledge, this is the first large-scale quantitative study of the Ukrainian Economics discipline. The results obtained are valuable not only at the national level, but also contribute to general knowledge about Economic research, gender issues and authors' names ordering. 
Here, for the first time, attention is drawn to the explicit use of the features of the Slavic authors' names. ### Trustless unknown-order groups - **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE) - **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128 - **Abstract** Groups of unknown order are of major interest due to their applications including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order N. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits. Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits. We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting. 
We conclude that in practice, Jacobians of hyperelliptic curves are more efficient than ideal class groups at the same security level -- both in the group operation and in the size of the element representation. ### AdaEnlight: Energy-aware Low-light Video Stream Enhancement on Mobile Devices - **Authors:** Sicong Liu (Northwestern Polytechnical University, China), Xiaochen Li (Northwestern Polytechnical University, China), Zimu Zhou (City University of Hong Kong, China), Bin Guo (Northwestern Polytechnical University, China), Meng Zhang (Northwestern Polytechnical University, China), Haochen Shen (Northwestern Polytechnical University, China), Zhiwen Yu (Northwestern Polytechnical University, China) - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16135 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16135 - **Abstract** The ubiquity of camera-embedded devices and the advances in deep learning have stimulated various intelligent mobile video applications. These applications often demand on-device processing of video streams to deliver real-time, high-quality services for privacy and robustness concerns. However, the performance of these applications is constrained by the raw video streams, which tend to be taken with small-aperture cameras of ubiquitous mobile platforms in dim light. Despite extensive low-light video enhancement solutions, they are unfit for deployment to mobile devices due to their complex models and their ignorance of system dynamics like energy budgets. In this paper, we propose AdaEnlight, an energy-aware low-light video stream enhancement system on mobile devices. It achieves real-time video enhancement with competitive visual quality while allowing runtime behavior adaptation to the platform-imposed dynamic energy budgets. 
We report extensive experiments on diverse datasets, scenarios, and platforms and demonstrate the superiority of AdaEnlight compared with state-of-the-art low-light image and video enhancement solutions. ### Few-shot Query-Focused Summarization with Prefix-Merging - **Authors:** Ruifeng Yuan, Zili Wang, Ziqiang Cao, Wenjie Li - **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16164 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16164 - **Abstract** Query-focused summarization has been considered an important extension for text summarization. It aims to generate a concise highlight for a given query. Different from text summarization, query-focused summarization has long been plagued by the problem of lacking high-quality large-scale datasets. In this paper, we investigate whether we can integrate and transfer the knowledge of text summarization and question answering to assist the few-shot learning in query-focused summarization. Here, we propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. Drawing inspiration from prefix-tuning, we integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small number of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and propose a visualized explanation for how prefix-merging works. 
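Prefix-merging integrates task knowledge from summarization and question answering into one prefix of virtual tokens that is prepended to the model input. A toy sketch of the merging idea, with made-up two-dimensional "embeddings" and a hypothetical concatenation scheme rather than the paper's learned design:

```python
# each prefix is a list of "virtual token" embedding vectors
# (2-dimensional toy values; real prefixes are trained tensors)
summ_prefix = [[0.1, 0.2], [0.3, 0.4]]  # from text summarization
qa_prefix = [[0.5, 0.6]]                # from question answering

def merge_prefixes(*prefixes):
    """Merge per-task prefixes into one prefix by concatenation
    (a hypothetical scheme; the paper learns the merged prefix)."""
    merged = []
    for p in prefixes:
        merged.extend(p)
    return merged

def prepend(prefix, input_embeddings):
    """The merged prefix is prepended to the input embeddings, so the
    frozen language model attends to it like extra context tokens."""
    return prefix + input_embeddings

merged = merge_prefixes(summ_prefix, qa_prefix)
```

Only the prefix vectors would be trainable here, which is why the parameter count stays small compared with full fine-tuning.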
### DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model - **Authors:** Gwanghyun Kim, Se Young Chun - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16374 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16374 - **Abstract** Recent 3D generative models have achieved remarkable performance in synthesizing high resolution photorealistic images with view consistency and detailed 3D shapes, but training them for diverse domains is challenging since it requires massive training images and their camera distribution information. Text-guided domain adaptation methods have shown impressive performance on converting the 2D generative model on one domain into the models on other domains with different styles by leveraging the CLIP (Contrastive Language-Image Pre-training), rather than collecting massive datasets for those domains. However, one drawback of them is that the sample diversity in the original generative model is not well-preserved in the domain-adapted generative models due to the deterministic nature of the CLIP text encoder. Text-guided domain adaptation will be even more challenging for 3D generative models not only because of catastrophic diversity loss, but also because of inferior text-image correspondence and poor image quality. Here we propose DATID-3D, a domain adaptation method tailored for 3D generative models using text-to-image diffusion models that can synthesize diverse images per text prompt without collecting additional images and camera information for the target domain. 
Unlike 3D extensions of prior text-guided domain adaptation methods, our novel pipeline was able to fine-tune the state-of-the-art 3D generator of the source domain to synthesize high resolution, multi-view consistent images in text-guided targeted domains without additional data, outperforming the existing text-guided domain adaptation methods in diversity and text-image correspondence. Furthermore, we propose and demonstrate diverse 3D image manipulations such as one-shot instance-selected adaptation and single-view manipulated 3D reconstruction to fully enjoy diversity in text. ### Symmetry Detection in Trajectory Data for More Meaningful Reinforcement Learning Representations - **Authors:** Marissa D'Alonzo, Rebecca Russell - **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Robotics (cs.RO) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16381 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16381 - **Abstract** Knowledge of the symmetries of reinforcement learning (RL) systems can be used to create compressed and semantically meaningful representations of a low-level state space. We present a method of automatically detecting RL symmetries directly from raw trajectory data without requiring active control of the system. Our method generates candidate symmetries and trains a recurrent neural network (RNN) to discriminate between the original trajectories and the transformed trajectories for each candidate symmetry. The RNN discriminator's accuracy for each candidate reveals how symmetric the system is under that transformation. This information can be used to create high-level representations that are invariant to all symmetries on a dataset level and to communicate properties of the RL behavior to users. We show in experiments on two simulated RL use cases (a pusher robot and a UAV flying in wind) that our method can determine the symmetries underlying both the environment physics and the trained RL policy. 
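The symmetry-detection idea above can be caricatured without an RNN: apply a candidate transformation to logged trajectories and check whether the transformed data is distinguishable from the original. The sketch below substitutes a crude summary-statistic comparison for the trained discriminator, so it only hints at the method:

```python
import random
import statistics

def trajectory(n=50):
    """Random-walk trajectory in 1D (a stand-in for logged RL rollouts)."""
    x, traj = 0.0, []
    for _ in range(n):
        x += random.uniform(-1, 1)
        traj.append(x)
    return traj

def symmetry_score(trajs, transform):
    """Crude stand-in for the RNN discriminator: compare a summary
    statistic of original vs. transformed trajectories. A score near
    zero suggests the transform is a symmetry of the data."""
    orig = statistics.mean(statistics.mean(t) for t in trajs)
    trans = statistics.mean(statistics.mean(transform(t)) for t in trajs)
    return abs(orig - trans)

random.seed(0)
trajs = [trajectory() for _ in range(200)]
reflect = lambda t: [-x for x in t]     # candidate symmetry: x -> -x
shift = lambda t: [x + 5.0 for x in t]  # candidate symmetry: x -> x + 5

score_reflect = symmetry_score(trajs, reflect)  # near 0: symmetric walk
score_shift = symmetry_score(trajs, shift)      # large: not a symmetry
```

The paper's discriminator accuracy plays the role of this score: low discriminability means the candidate transformation is (approximately) a symmetry.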
### Abstract Visual Reasoning with Tangram Shapes - **Authors:** Anya Ji, Noriyuki Kojima, Noah Rush, Alane Suhr, Wai Keen Vong, Robert D. Hawkins, Yoav Artzi - **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16492 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16492 - **Abstract** We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. KiloGram is available at https://lil.nlp.cornell.edu/kilogram . ## Keyword: raw image ### Learning Visual Planning Models from Partially Observed Images - **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai - **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666 - **Abstract** There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. 
It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We show empirically that our approach is more effective than a state-of-the-art approach of learning visual planning models in the environment with incomplete observations.
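Setting the latent image representations aside, the transition-model part of such a framework reduces to estimating transition probabilities from traces while tolerating unobserved frames. A minimal count-based sketch (hypothetical symbolic state names; Recplan itself learns latent states from images and completes the gaps rather than skipping them):

```python
from collections import defaultdict

def learn_transitions(traces):
    """Count-based transition model from partially observed traces.
    `None` marks an unobserved frame; pairs touching a gap are skipped
    (a big simplification of latent-state completion)."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for cur, nxt in zip(trace, trace[1:]):
            if cur is None or nxt is None:
                continue
            counts[cur][nxt] += 1
    # normalize counts into per-state transition probabilities
    model = {}
    for state, successors in counts.items():
        total = sum(successors.values())
        model[state] = {s: c / total for s, c in successors.items()}
    return model

traces = [["s0", "s1", None, "s2"], ["s0", "s1", "s2"]]
model = learn_transitions(traces)
```

A planner can then search over this model, guided by a separately learned heuristic toward the goal observation.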
New submissions for Wed, 30 Nov 22 - ## Keyword: events ### Post-training Quantization on Diffusion Models - **Authors:** Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, Yan Yan - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15736 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15736 - **Abstract** Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. It prevents the diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of diffusion model (DM) via finding shorter yet effective sampling trajectories. However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM acceleration. However, the output distributions of noise estimation networks change with time-step, making previous PTQ methods fail in DMs since they are designed for single-time step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DM in three aspects: quantized operations, calibration dataset, and calibration metric. We summarize and use several observations derived from all-inclusive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. 
Experimentally, our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner. Importantly, our method can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM. ### Beyond Ensemble Averages: Leveraging Climate Model Ensembles for Subseasonal Forecasting - **Authors:** Elena Orlova, Haokun Liu, Raphael Rossellini, Benjamin Cash, Rebecca Willett - **Subjects:** Machine Learning (cs.LG); Atmospheric and Oceanic Physics (physics.ao-ph) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15856 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15856 - **Abstract** Producing high-quality forecasts of key climate variables such as temperature and precipitation on subseasonal time scales has long been a gap in operational forecasting. Recent studies have shown promising results using machine learning (ML) models to advance subseasonal forecasting (SSF), but several open questions remain. First, several past approaches use the average of an ensemble of physics-based forecasts as an input feature of these models. However, ensemble forecasts contain information that can aid prediction beyond only the ensemble mean. Second, past methods have focused on average performance, whereas forecasts of extreme events are far more important for planning and mitigation purposes. Third, climate forecasts correspond to a spatially-varying collection of forecasts, and different methods account for spatial variability in the response differently. Trade-offs between different approaches may be mitigated with model stacking. This paper describes the application of a variety of ML methods used to predict monthly average precipitation and two meter temperature using physics-based predictions (ensemble forecasts) and observational data such as relative humidity, pressure at sea level, or geopotential height, two weeks in advance for the whole continental United States. 
Regression, quantile regression, and tercile classification tasks using linear models, random forests, convolutional neural networks, and stacked models are considered. The proposed models outperform common baselines such as historical averages (or quantiles) and ensemble averages (or quantiles). This paper further includes an investigation of feature importance, trade-offs between using the full ensemble or only the ensemble average, and different modes of accounting for spatial variability. ### Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework - **Authors:** Amin Shojaeighadikolaei, Arman Ghasemi, Kailani Jones, Yousif Dafalla, Alexandru G. Bardas, Reza Ahmadi, Morteza Haashemi - **Subjects:** Multiagent Systems (cs.MA); Machine Learning (cs.LG); Systems and Control (eess.SY) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15858 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15858 - **Abstract** This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems. In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users. DR has a widely recognized potential for improving power grid stability and reliability, while at the same time reducing end-users energy bills. However, the conventional DR techniques come with several shortcomings, such as the inability to handle operational uncertainties while incurring end-user disutility, which prevents widespread adoption in real-world applications. The proposed framework addresses these shortcomings by implementing DR and DEM based on real-time pricing strategy that is achieved using deep reinforcement learning. 
Furthermore, this framework enables the power grid service provider to leverage distributed energy resources (i.e., PV rooftop panels and battery storage) as dispatchable assets to support the smart grid during peak hours, thus achieving management of distributed energy resources. Simulation results based on the Deep Q-Network (DQN) demonstrate significant improvements of the 24-hour accumulative profit for both prosumers and the power grid service provider, as well as major reductions in the utilization of the power grid reserve generators. ### An Extreme-Adaptive Time Series Prediction Model Based on Probability-Enhanced LSTM Neural Networks - **Authors:** Yanhong Li, Jack Xu, David C. Anastasiu - **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15891 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15891 - **Abstract** Forecasting time series with extreme events has been a challenging and prevalent research topic, especially when the time series data are affected by complicated uncertain factors, such as is the case in hydrologic prediction. Diverse traditional and deep learning models have been applied to discover the nonlinear relationships and recognize the complex patterns in these types of data. However, existing methods usually ignore the negative influence of imbalanced data, or severe events, on model training. Moreover, methods are usually evaluated on a small number of generally well-behaved time series, which does not show their ability to generalize. To tackle these issues, we propose a novel probability-enhanced neural network model, called NEC+, which concurrently learns extreme and normal prediction functions and a way to choose among them via selective back propagation. We evaluate the proposed model on the difficult 3-day ahead hourly water level prediction task applied to 9 reservoirs in California. 
Experimental results demonstrate that the proposed model significantly outperforms state-of-the-art baselines and exhibits superior generalization ability on data with diverse distributions. ### Finlay, Thames, Dufay, and Paget color screen process collections: Using digital registration of viewing screens to reveal original color - **Authors:** Geoffrey Barker, Jan Hubička, Mark Jacobs, Linda Kimrová, Kendra Meyer, Doug Peterson - **Subjects:** Graphics (cs.GR); Multimedia (cs.MM) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16076 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16076 - **Abstract** We discuss digitization, subsequent digital analysis and processing of negatives (and diapositives) made by Finlay, Thames, Dufay, Paget, and similar additive color screen processes. These early color processes (introduced in the 1890s and popular until the 1950s) used a special color screen filter and a monochromatic negative. Due to poor stability of dyes used to produce color screens many of the photographs appear faded; others exist only in the form of (monochromatic) negatives. We discuss the possibility of digitally reconstructing the original color from scans of original negatives or by virtue of infrared imaging of original transparencies (which eliminates the physically coupled color filters) and digitally recreating the original color filter pattern using a new open-source software tool. Photographs taken using additive color screen processes are some of the very earliest color images of our shared cultural heritage. They depict people, places, and events for which there are no other surviving color images. We hope that our new software tool can bring these images back to life. 
### G-CMP: Graph-enhanced Contextual Matrix Profile for unsupervised anomaly detection in sensor-based remote health monitoring - **Authors:** Nivedita Bijlani, Oscar Mendez Maldonado, Samaneh Kouchaki - **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16122 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16122 - **Abstract** Sensor-based remote health monitoring is used in industrial, urban and healthcare settings to monitor ongoing operation of equipment and human health. An important aim is to intervene early if anomalous events or adverse health is detected. In the wild, these anomaly detection approaches are challenged by noise, label scarcity, high dimensionality, explainability and wide variability in operating environments. The Contextual Matrix Profile (CMP) is a configurable 2-dimensional version of the Matrix Profile (MP) that uses the distance matrix of all subsequences of a time series to discover patterns and anomalies. The CMP is shown to enhance the effectiveness of the MP and other SOTA methods at detecting, visualising and interpreting true anomalies in noisy real world data from different domains. It excels at zooming out and identifying temporal patterns at configurable time scales. However, the CMP does not address cross-sensor information, and cannot scale to high dimensional data. We propose a novel, self-supervised graph-based approach for temporal anomaly detection that works on context graphs generated from the CMP distance matrix. The learned graph embeddings encode the anomalous nature of a time context. In addition, we evaluate other graph outlier algorithms for the same task. Given our pipeline is modular, graph construction, generation of graph embeddings, and pattern recognition logic can all be chosen based on the specific pattern detection application. 
We verified the effectiveness of graph-based anomaly detection and compared it with the CMP and three state-of-the-art methods on two real-world healthcare datasets with different anomalies. Our proposed method demonstrated better recall, alert rate and generalisability. ### Physics Informed Neural Network for Dynamic Stress Prediction - **Authors:** Hamed Bolandi, Gautam Sreekumar, Xuyang Li, Nizar Lajnef, Vishnu Naresh Boddeti - **Subjects:** Machine Learning (cs.LG) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16190 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16190 - **Abstract** Structural failures are often caused by catastrophic events such as earthquakes and winds. As a result, it is crucial to predict dynamic stress distributions during highly disruptive events in real time. Currently available high-fidelity methods, such as Finite Element Models (FEMs), suffer from their inherent high complexity. Therefore, to reduce computational cost while maintaining accuracy, a Physics Informed Neural Network (PINN), PINN-Stress model, is proposed to predict the entire sequence of stress distribution based on Finite Element simulations using a partial differential equation (PDE) solver. Using automatic differentiation, we embed a PDE into a deep neural network's loss function to incorporate information from measurements and PDEs. The PINN-Stress model can predict the sequence of stress distribution in almost real-time and can generalize better than the model without PINN. ### Reasoning about Promises in Weak Memory Models with Event Structures (Extended Version) - **Authors:** Heike Wehrheim, Lara Bargmann, Brijesh Dongol - **Subjects:** Logic in Computer Science (cs.LO); Programming Languages (cs.PL) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16330 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16330 - **Abstract** Modern processors such as ARMv8 and RISC-V allow executions in which independent instructions within a process may be reordered. 
To cope with such phenomena, so-called promising semantics have been developed, which permit threads to read values that have not yet been written. Each promise is a speculative update that is later validated (fulfilled) by an actual write. Promising semantics are operational, providing a pathway for developing proof calculi. In this paper, we develop an incorrectness-style logic, resulting in a framework for reasoning about state reachability. Like incorrectness logic, our assertions are underapproximating, since the set of all valid promises is not known at the start of execution. Our logic uses event structures as assertions to compactly represent the ordering among events such as promised and fulfilled writes. We prove soundness and completeness of our proof calculus and demonstrate its applicability by proving reachability properties of standard weak memory litmus tests. ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWBISP There is no result ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Post-training Quantization on Diffusion Models - **Authors:** Yuzhang Shang, Zhihang Yuan, Bin Xie, Bingzhe Wu, Yan Yan - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15736 - **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15736 - **Abstract** Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. 
Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. This prevents diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of the diffusion model (DM) by finding shorter yet effective sampling trajectories. However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM acceleration. However, the output distributions of noise estimation networks change with time-step, making previous PTQ methods fail in DMs since they are designed for single-time-step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DM in three aspects: quantized operations, calibration dataset, and calibration metric. We summarize and use several observations derived from all-inclusive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. Experimentally, our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner. Importantly, our method can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM.
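As context for the abstract above, a minimal, generic sketch of uniform 8-bit post-training quantization (this is a textbook affine quantizer for illustration only, not the paper's DM-specific method; all names here are hypothetical):

```python
import numpy as np

def quantize_uint8(w):
    """Uniform affine post-training quantization of a float tensor to 8 bits."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0  # avoid div-by-zero for constant tensors
    q = np.clip(np.round((w - lo) / scale), 0, 255).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Map the 8-bit codes back to floats for inference."""
    return q.astype(np.float32) * scale + lo

# Quantizing random "weights" keeps them within half a quantization step.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale, lo = quantize_uint8(w)
w_hat = dequantize(q, scale, lo)
max_err = float(np.abs(w - w_hat).max())  # bounded by ~scale/2
```

The rounding error of such a quantizer is bounded by half a step, which is why 8-bit PTQ can often preserve model quality without retraining.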
### Compressing Cross-Lingual Multi-Task Models at Qualtrics
- **Authors:** Daniel Campos, Daniel Perry, Samir Joshi, Yashmeet Gambhir, Wei Du, Zhengzheng Xing, Aaron Colak
- **Subjects:** Computation and Language (cs.CL); Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15927
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15927
- **Abstract**
Experience management is an emerging business area where organizations focus on understanding the feedback of customers and employees in order to improve their end-to-end experiences. This results in a unique set of machine learning problems to help understand how people feel, discover issues they care about, and find which actions need to be taken on data that are different in content and distribution from traditional NLP domains. In this paper, we present a case study of building text analysis applications that perform multiple classification tasks efficiently in 12 languages in the nascent business area of experience management. In order to scale up modern ML methods on experience data, we leverage cross-lingual and multi-task modeling techniques to consolidate our models into a single deployment to avoid overhead. We also make use of model compression and model distillation to reduce overall inference latency and hardware cost to the level acceptable for business needs while maintaining model prediction quality. Our findings show that multi-task modeling improves task performance for a subset of experience management tasks in both XLM-R and mBert architectures. Among the compressed architectures we explored, we found that MiniLM achieved the best compression/performance tradeoff. Our case study demonstrates a speedup of up to 15.61x with 2.60% average task degradation (or 3.29x speedup with 1.71% degradation) and estimated savings of 44% over using the original full-size model.
These results demonstrate a successful scaling up of text classification for the challenging new area of ML for experience management.

### Maximal Atomic irRedundant Sets: a Usage-based Dataflow Partitioning Algorithm
- **Authors:** Corentin Ferry, Steven Derrien, Sanjay Rajopadhye
- **Subjects:** Programming Languages (cs.PL); Distributed, Parallel, and Cluster Computing (cs.DC)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15933
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15933
- **Abstract**
Programs admitting a polyhedral representation can be transformed in many ways for locality and parallelism, notably loop tiling. Data flow analysis can then compute dependence relations between iterations and between tiles. When tiling is applied, certain iteration-wise dependences cross tile boundaries, creating the need for inter-tile data communication. Previous work computes it as the flow-in and flow-out sets of iteration tiles. In this paper, we propose a partitioning of the flow-out of a tile into the maximal sets of iterations that are entirely consumed and incur no redundant storage or transfer. The computation is described as an algorithm and performed on a selection of polyhedral programs. We then suggest possible applications of this decomposition in compression and memory allocation.

### Trustless unknown-order groups
- **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE)
- **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128
- **Abstract**
Groups of unknown order are of major interest due to their applications including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields.
We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order N. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits. Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits. We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting. We conclude that, in practice, Jacobians of hyperelliptic curves are more efficient than ideal class groups at the same security level -- both in the group operation and in the size of the element representation.

### DBA: Efficient Transformer with Dynamic Bilinear Low-Rank Attention
- **Authors:** Bosheng Qin, Juncheng Li, Siliang Tang, Yueting Zhuang
- **Subjects:** Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16368
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16368
- **Abstract**
Many studies have been conducted to improve the efficiency of the Transformer from quadratic to linear. Among them, the low-rank-based methods aim to learn the projection matrices to compress the sequence length.
However, the projection matrices are fixed once they have been learned, which compress sequence length with dedicated coefficients for tokens in the same position. Adopting such input-invariant projections ignores the fact that the most informative part of a sequence varies from sequence to sequence, thus failing to preserve the most useful information that lies in varied positions. In addition, previous efficient Transformers only focus on the influence of sequence length while neglecting the effect of hidden state dimension. To address the aforementioned problems, we present an efficient yet effective attention mechanism, namely the Dynamic Bilinear Low-Rank Attention (DBA), which compresses the sequence length by input-sensitive dynamic projection matrices and achieves linear time and space complexity by jointly optimizing the sequence length and hidden state dimension while maintaining state-of-the-art performance. Specifically, we first theoretically demonstrate that the sequence length can be compressed non-destructively from a novel perspective of information theory, with compression matrices dynamically determined by the input sequence. Furthermore, we show that the hidden state dimension can be approximated by extending the Johnson-Lindenstrauss lemma, optimizing the attention in bilinear form. Theoretical analysis shows that DBA is proficient in capturing high-order relations in cross-attention problems. Experiments over tasks with diverse sequence length conditions show that DBA achieves state-of-the-art performance compared with various strong baselines while maintaining less memory consumption with higher speed. 
### Compressing Volumetric Radiance Fields to 1 MB
- **Authors:** Lingzhi Li, Zhen Shen, Zhongshu Wang, Li Shen, Liefeng Bo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16386
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16386
- **Abstract**
Approximating radiance fields with volumetric grids is one of the promising directions for improving NeRF, represented by methods like Plenoxels and DVGO, which achieve super-fast training convergence and real-time rendering. However, these methods typically require a tremendous storage overhead, costing up to hundreds of megabytes of disk space and runtime memory for a single scene. We address this issue in this paper by introducing a simple yet effective framework, called vector quantized radiance fields (VQRF), for compressing these volume-grid-based radiance fields. We first present a robust and adaptive metric for estimating redundancy in grid models and performing voxel pruning by better exploring intermediate outputs of volumetric rendering. A trainable vector quantization is further proposed to improve the compactness of grid models. In combination with an efficient joint tuning strategy and post-processing, our method can achieve a compression ratio of 100$\times$ by reducing the overall model size to 1 MB with negligible loss of visual quality. Extensive experiments demonstrate that the proposed framework is capable of achieving unrivaled performance and generalizing well across multiple methods with distinct volumetric structures, facilitating the wide use of volumetric radiance field methods in real-world applications.
Code available at \url{https://github.com/AlgoHunt/VQRF}

## Keyword: RAW

### Learning Visual Planning Models from Partially Observed Images
- **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666
- **Abstract**
There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, \aType{Recplan}, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We exhibit empirically that our approach is more effective than a state-of-the-art approach of learning visual planning models in the environment with incomplete observations.
### Deep Semi-supervised Learning with Double-Contrast of Features and Semantics
- **Authors:** Quan Feng, Jiayu Yao, Zhison Pan, Guojun Zhou
- **Subjects:** Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15671
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15671
- **Abstract**
In recent years, the field of intelligent transportation systems (ITS) has achieved remarkable success, mainly due to the large amount of available annotation data. However, obtaining such annotated data is expensive in practice. A more realistic strategy is therefore to leverage semi-supervised learning (SSL) with a small amount of labeled data and a large amount of unlabeled data. Typically, semantic consistency regularization and two-stage learning methods that decouple feature extraction and classification have proven effective. Nevertheless, representation learning limited to semantic consistency regularization alone may not guarantee the separation or discriminability of representations of samples with different semantics; due to the inherent limitations of two-stage learning methods, the extracted features may not match specific downstream tasks. To address these drawbacks, this paper proposes an end-to-end deep semi-supervised learning method with a double contrast of semantics and features, which extracts effective task-specific discriminative features by contrasting the semantics/features of pairs of positive and negative augmented samples. Moreover, we leverage information theory to explain the rationality of the double contrast of semantics and features, and relax mutual information to a contrastive loss in a simpler way. Finally, the effectiveness of our method is verified on benchmark datasets.
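To make the contrastive idea in the abstract above concrete, here is a minimal, generic InfoNCE-style loss over one positive and several negative pairs (an illustration of contrastive learning in general, not the paper's double-contrast formulation; the temperature and vectors are hypothetical):

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic contrastive loss: pull the positive pair close, push negatives away."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Similarities: positive pair first, then each negative.
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits /= temperature
    # Cross-entropy with the positive at index 0.
    return float(-logits[0] + np.log(np.exp(logits).sum()))

anchor = np.array([1.0, 0.0])
close = np.array([0.9, 0.1])   # e.g. an augmentation of the anchor
far = np.array([-1.0, 0.2])    # a dissimilar sample
low = info_nce(anchor, close, [far])   # positive really is similar -> small loss
high = info_nce(anchor, far, [close])  # mislabeled positive -> large loss
```

Minimizing such a loss is one standard way to "slack" a mutual-information bound into a trainable objective, which is the general family of techniques the abstract refers to.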
### Superpoint Transformer for 3D Scene Instance Segmentation
- **Authors:** Jiahao Sun, Chunmei Qing, Junpeng Tan, Xiangmin Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15766
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15766
- **Abstract**
Most existing methods realize 3D instance segmentation by extending models used for 3D object detection or 3D semantic segmentation. However, these non-straightforward methods suffer from two drawbacks: 1) Imprecise bounding boxes or unsatisfactory semantic predictions limit the performance of the overall 3D instance segmentation framework. 2) Existing methods require a time-consuming intermediate step of aggregation. To address these issues, this paper proposes a novel end-to-end 3D instance segmentation method based on a Superpoint Transformer, named SPFormer. It groups potential features from point clouds into superpoints, and directly predicts instances through query vectors without relying on the results of object detection or semantic segmentation. The key step in this framework is a novel query decoder with transformers that can capture instance information through a superpoint cross-attention mechanism and generate the superpoint masks of the instances. Through bipartite matching based on superpoint masks, SPFormer can be trained without the intermediate aggregation step, which accelerates the network. Extensive experiments on the ScanNetv2 and S3DIS benchmarks verify that our method is concise yet efficient. Notably, SPFormer exceeds state-of-the-art methods by 4.3% on the ScanNetv2 hidden test set in terms of mAP while keeping fast inference speed (247 ms per frame). Code is available at https://github.com/sunjiahao1999/SPFormer.
### ClueWeb22: 10 Billion Web Documents with Rich Information
- **Authors:** Arnold Overwijk, Chenyan Xiong, Xiao Liu, Cameron VandenBerg, Jamie Callan
- **Subjects:** Information Retrieval (cs.IR); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15848
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15848
- **Abstract**
ClueWeb22, the newest iteration of the ClueWeb line of datasets, provides 10 billion web pages affiliated with rich information. Its design was influenced by the need for a high-quality, large-scale web corpus to support a range of academic and industry research, for example, in information systems, retrieval-augmented AI systems, and model pretraining. Compared with earlier ClueWeb corpora, the ClueWeb22 corpus is larger, more varied, of higher quality, and aligned with the document distributions in commercial web search. Besides raw HTML, ClueWeb22 includes rich information about the web pages provided by industry-standard document understanding systems, including the visual representation of pages rendered by a web browser, parsed HTML structure information from a neural network parser, and pre-processed cleaned document text to lower the barrier to entry. Many of these signals have been widely used in industry but are available to the research community for the first time at this scale.
### Neural Feature-Adaptation for Symbolic Predictions Using Pre-Training and Semantic Loss
- **Authors:** Vedant Shah, Aditya Agrawal, Lovekesh Vig, Ashwin Srinivasan, Gautam Shroff, Tanmay Verlekar
- **Subjects:** Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Logic in Computer Science (cs.LO)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16047
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16047
- **Abstract**
We are interested in neurosymbolic systems consisting of a high-level symbolic layer for explainable prediction in terms of human-intelligible concepts, and a low-level neural layer for extracting the symbols required to generate the symbolic explanation. Real data is often imperfect, meaning that even if the symbolic theory remains unchanged, we may still need to address the problem of mapping raw data to high-level symbols each time there is a change in the data acquisition environment or equipment. Manual (re-)annotation of the raw data each time this happens is laborious and expensive, and automated labelling methods are often imperfect, especially for complex problems. NEUROLOG proposed the use of a semantic loss function that allows an existing feature-based symbolic model to guide the extraction of feature values from raw data, using `abduction'. However, the experiments demonstrating the use of semantic loss through abduction appear to rely heavily on a domain-specific pre-processing step that enables a prior delineation of feature locations in the raw data. We examine the use of semantic loss in domains where such pre-processing is not possible, or is not obvious. We show that without any prior information about the features, the NEUROLOG approach can continue to predict accurately even with substantially incorrect feature predictions. We show also that prior information about the features in the form of even imperfect pre-training can help correct this situation.
These findings are replicated on the original problem considered by NEUROLOG, without the use of feature delineation. This suggests that symbolic explanations constructed for data in one domain could be re-used in a related domain, by `feature-adaptation' of pre-trained neural extractors using the semantic loss function constrained by abductive feedback.

### Behavior Estimation from Multi-Source Data for Offline Reinforcement Learning
- **Authors:** Guoxi Zhang, Hisashi Kashima
- **Subjects:** Machine Learning (cs.LG); Robotics (cs.RO)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16078
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16078
- **Abstract**
Offline reinforcement learning (RL) has received rising interest due to its appealing data efficiency. The present study addresses behavior estimation, a task that lays the foundation of many offline RL algorithms. Behavior estimation aims at estimating the policy with which training data are generated. In particular, this work considers a scenario where the data are collected from multiple sources. In this case, by neglecting data heterogeneity, existing approaches for behavior estimation suffer from behavior misspecification. To overcome this drawback, the present study proposes a latent variable model to infer a set of policies from data, which allows an agent to use as behavior policy the policy that best describes a particular trajectory. This model provides an agent with a fine-grained characterization of multi-source data and helps it overcome behavior misspecification. This work also proposes a learning algorithm for this model and illustrates its practical usage by extending an existing offline RL algorithm. Lastly, with extensive evaluation, this work confirms the existence of behavior misspecification and the efficacy of the proposed model.

### Peculiarities of gender disambiguation and ordering of non-English authors' names for Economic papers beyond core databases
- **Authors:** O. Mryglod, S. Nazarovets, S. Kozmenko
- **Subjects:** Digital Libraries (cs.DL)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16124
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16124
- **Abstract**
This paper presents the results of further exploration of Crossref data related to Ukrainian Economics research (the first part can be found in [Mryglod, O., Nazarovets, S. & Kozmenko, S. (2021) Scientometrics, 126, 8187]). Our purpose is to supplement the quantitative portrait of the Ukrainian Economics discipline with the results of gender and author-ordering analysis at the level of individual authors; special methods for working with bibliographic data with a predominant share of non-English authors are used. The properties of gender mixing, the likelihood of male and female authors occupying the first position in the authorship list, as well as the arrangements of names, are studied. A data set containing bibliographic records related to Ukrainian journal publications in the field of Economics is constructed using Crossref metadata. The described stages of working with such specific data help to work at the level of authors and analyse, in particular, gender issues. Despite the larger number of female authors, gender equality is more likely to be reported at the individual level for the discipline of Ukrainian Economics. The tendencies towards collaborative or solo publications and gender-mixing patterns are found to depend on the journal: differences are found between publications indexed in the Scopus and/or Web of Science databases. It has also been found that Ukrainian Economics research is characterized by a rather non-alphabetical ordering of authors. To our knowledge, this is the first large-scale quantitative study of the Ukrainian Economics discipline. The results obtained are valuable not only at the national level, but also contribute to general knowledge about Economics research, gender issues and the ordering of authors' names.
Here, for the first time, attention is drawn to the explicit use of the features of Slavic authors' names.

### Trustless unknown-order groups
- **Authors:** Samuel Dobson, Steven Galbraith, Benjamin Smith (GRACE)
- **Subjects:** Cryptography and Security (cs.CR); Number Theory (math.NT)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16128
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16128
- **Abstract**
Groups of unknown order are of major interest due to their applications including time-lock puzzles, verifiable delay functions, and accumulators. In this paper we focus on trustless setup: in this setting, the most popular unknown-order group construction is ideal class groups of imaginary quadratic fields. We argue that the full impact of Sutherland's generic group-order algorithm has not been recognised in this context, and show that group sizes currently being proposed in practice (namely, approximately 830 bits) do not meet the claimed security level. Instead, we claim that random group orders should be at least 3300 bits to meet a 128-bit security level. For ideal class groups this leads to discriminants of around 6656 bits, which are much larger than desirable. One drawback of class groups is that current approaches require approximately $2\log_2(N)$ bits to represent an element in a group of order N. We provide two solutions to mitigate this blow-up in the size of representations. First, we explain how an idea of Bleichenbacher can be used to compress class group elements to $(3/2)\log_2(N)$ bits. Second, we note that using Jacobians of hyperelliptic curves (in other words, class groups of quadratic function fields) allows efficient compression to the optimal element representation size of $\log_2(N)$ bits. We discuss point-counting approaches for hyperelliptic curves and argue that genus-3 curves are secure in the trustless unknown-order setting.
We conclude that, in practice, Jacobians of hyperelliptic curves are more efficient than ideal class groups at the same security level -- both in the group operation and in the size of the element representation.

### AdaEnlight: Energy-aware Low-light Video Stream Enhancement on Mobile Devices
- **Authors:** Sicong Liu (Northwestern Polytechnical University, China), Xiaochen Li (Northwestern Polytechnical University, China), Zimu Zhou (City University of Hong Kong, China), Bin Guo (Northwestern Polytechnical University, China), Meng Zhang (Northwestern Polytechnical University, China), Haochen Shen (Northwestern Polytechnical University, China), Zhiwen Yu (Northwestern Polytechnical University, China)
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16135
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16135
- **Abstract**
The ubiquity of camera-embedded devices and the advances in deep learning have stimulated various intelligent mobile video applications. These applications often demand on-device processing of video streams to deliver real-time, high-quality services for privacy and robustness concerns. However, the performance of these applications is constrained by the raw video streams, which tend to be taken with small-aperture cameras of ubiquitous mobile platforms in dim light. Despite extensive low-light video enhancement solutions, they are unfit for deployment to mobile devices due to their complex models and ignorance of system dynamics like energy budgets. In this paper, we propose AdaEnlight, an energy-aware low-light video stream enhancement system on mobile devices. It achieves real-time video enhancement with competitive visual quality while allowing runtime behavior adaptation to platform-imposed dynamic energy budgets.
We report extensive experiments on diverse datasets, scenarios, and platforms and demonstrate the superiority of AdaEnlight compared with state-of-the-art low-light image and video enhancement solutions.

### Few-shot Query-Focused Summarization with Prefix-Merging
- **Authors:** Ruifeng Yuan, Zili Wang, Ziqiang Cao, Wenjie Li
- **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16164
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16164
- **Abstract**
Query-focused summarization has been considered an important extension of text summarization. It aims to generate a concise highlight for a given query. Different from text summarization, query-focused summarization has long been plagued by the lack of high-quality large-scale datasets. In this paper, we investigate whether we can integrate and transfer the knowledge of text summarization and question answering to assist few-shot learning in query-focused summarization. We propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. Drawing inspiration from prefix-tuning, we integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small amount of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and propose a visualized explanation of how prefix-merging works.
### DATID-3D: Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model
- **Authors:** Gwanghyun Kim, Se Young Chun
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16374
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16374
- **Abstract**
Recent 3D generative models have achieved remarkable performance in synthesizing high resolution photorealistic images with view consistency and detailed 3D shapes, but training them for diverse domains is challenging since it requires massive training images and their camera distribution information. Text-guided domain adaptation methods have shown impressive performance on converting the 2D generative model on one domain into the models on other domains with different styles by leveraging the CLIP (Contrastive Language-Image Pre-training), rather than collecting massive datasets for those domains. However, one drawback of them is that the sample diversity in the original generative model is not well-preserved in the domain-adapted generative models due to the deterministic nature of the CLIP text encoder. Text-guided domain adaptation will be even more challenging for 3D generative models not only because of catastrophic diversity loss, but also because of inferior text-image correspondence and poor image quality. Here we propose DATID-3D, a domain adaptation method tailored for 3D generative models using text-to-image diffusion models that can synthesize diverse images per text prompt without collecting additional images and camera information for the target domain.
Unlike 3D extensions of prior text-guided domain adaptation methods, our novel pipeline was able to fine-tune the state-of-the-art 3D generator of the source domain to synthesize high resolution, multi-view consistent images in text-guided targeted domains without additional data, outperforming the existing text-guided domain adaptation methods in diversity and text-image correspondence. Furthermore, we propose and demonstrate diverse 3D image manipulations such as one-shot instance-selected adaptation and single-view manipulated 3D reconstruction to fully enjoy diversity in text.

### Symmetry Detection in Trajectory Data for More Meaningful Reinforcement Learning Representations
- **Authors:** Marissa D'Alonzo, Rebecca Russell
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Robotics (cs.RO)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16381
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16381
- **Abstract**
Knowledge of the symmetries of reinforcement learning (RL) systems can be used to create compressed and semantically meaningful representations of a low-level state space. We present a method of automatically detecting RL symmetries directly from raw trajectory data without requiring active control of the system. Our method generates candidate symmetries and trains a recurrent neural network (RNN) to discriminate between the original trajectories and the transformed trajectories for each candidate symmetry. The RNN discriminator's accuracy for each candidate reveals how symmetric the system is under that transformation. This information can be used to create high-level representations that are invariant to all symmetries on a dataset level and to communicate properties of the RL behavior to users. We show in experiments on two simulated RL use cases (a pusher robot and a UAV flying in wind) that our method can determine the symmetries underlying both the environment physics and the trained RL policy.
### Abstract Visual Reasoning with Tangram Shapes
- **Authors:** Anya Ji, Noriyuki Kojima, Noah Rush, Alane Suhr, Wai Keen Vong, Robert D. Hawkins, Yoav Artzi
- **Subjects:** Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.16492
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.16492
- **Abstract**
We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. KiloGram is available at https://lil.nlp.cornell.edu/kilogram .

## Keyword: raw image

### Learning Visual Planning Models from Partially Observed Images
- **Authors:** Kebing Jin, Zhanhao Xiao, Hankui Hankz Zhuo, Hai Wan, Jiaran Cai
- **Subjects:** Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://xxx.itp.ac.cn/abs/2211.15666
- **Pdf link:** https://xxx.itp.ac.cn/pdf/2211.15666
- **Abstract**
There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We exhibit empirically that our approach is more effective than a state-of-the-art approach of learning visual planning models in the environment with incomplete observations.
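The final step of the Recplan abstract, a classical planner running over a learned transition model and guided by a learned goal-distance heuristic, can be sketched, under heavy simplification, as A* search over a toy discrete latent space. The transition table and the Hamming-distance heuristic below are illustrative stand-ins for the learned neural models, not the paper's code.

```python
import heapq

def astar(start, goal, actions, transition, heuristic):
    # transition: dict mapping (state, action) -> next latent state,
    # playing the role of the learned transition model.
    # heuristic(state, goal): estimated distance to the goal observation,
    # playing the role of the learned heuristic network.
    frontier = [(heuristic(start, goal), 0, start, [])]
    seen = set()
    while frontier:
        _, cost, state, plan = heapq.heappop(frontier)
        if state == goal:
            return plan
        if state in seen:
            continue
        seen.add(state)
        for a in actions:
            nxt = transition.get((state, a))
            if nxt is None:
                continue
            heapq.heappush(
                frontier,
                (cost + 1 + heuristic(nxt, goal), cost + 1, nxt, plan + [a]))
    return None

# Toy 2-bit latent domain: action i flips bit i of the state.
hamming = lambda s, g: sum(x != y for x, y in zip(s, g))
transition = {((a, b), i): tuple(1 - v if j == i else v
                                 for j, v in enumerate((a, b)))
              for a in (0, 1) for b in (0, 1) for i in (0, 1)}
plan = astar((0, 0), (1, 1), [0, 1], transition, hamming)  # flips both bits
```

In the paper's setting the states would instead be latent encodings of (possibly incomplete) image observations, with the transition and heuristic functions learned from traces.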
level of authors and analyse in particular gender issues despite the larger number of female authors gender equality is more likely to be reported at the individual level for the discipline of ukrainian economics the tendencies towards collaborative or solo publications and gender mixing patterns are found to be dependent on the journal the differences for publications indexed in scopus and or web of science databases are found it has also been found that ukrainian economics research is characterized by rather a non alphabetical order of authors to our knowledge this is the first large scale quantitative study of ukrainian economic discipline the results obtained are valuable not only at the national level but also contribute to general knowledge about economic research gender issues and authors names ordering here for the first time attention is drawn to the explicit use of the features of the slavic authors names trustless unknown order groups authors samuel dobson steven galbraith benjamin smith grace subjects cryptography and security cs cr number theory math nt arxiv link pdf link abstract groups of unknown order are of major interest due to their applications including time lock puzzles verifiable delay functions and accumulators in this paper we focus on trustless setup in this setting the most popular unknown order group construction is ideal class groups of imaginary quadratic fields we argue that the full impact of sutherland s generic group order algorithm has not been recognised in this context and show that group sizes currently being proposed in practice namely approximately bits do not meet the claimed security level instead we claim that random group orders should be at least bits to meet a bit security level for ideal class groups this leads to discriminants of around bits which are much larger than desirable one drawback of class groups is that current approaches require approximately log n bits to represent an element in a group of order n we 
provide two solutions to mitigate this blow up in the size of representations first we explain how an idea of bleichenbacher can be used to compress class group elements to log n bits second we note that using jacobians of hyperelliptic curves in other words class groups of quadratic function fields allows efficient compression to the optimal element representation size of log n bits we discuss point counting approaches for hyperelliptic curves and argue that genus curves are secure in the trustless unknown order setting we conclude that in practice jacobians of hyperelliptic curves are more efficient in practice than ideal class groups at the same security level both in the group operation and in the size of the element representation adaenlight energy aware low light video stream enhancement on mobile devices authors sicong liu northwestern polytechnical university china xiaochen li northwestern polytechnical university china zimu zhou city university of hong kong china bin guo northwestern polytechnical university china meng zhang northwestern polytechnical university china haochen shen northwestern polytechnical university china zhiwen yu northwestern polytechnical university china subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract the ubiquity of camera embedded devices and the advances in deep learning have stimulated various intelligent mobile video applications these applications often demand on device processing of video streams to deliver real time high quality services for privacy and robustness concerns however the performance of these applications is constrained by the raw video streams which tend to be taken with small aperture cameras of ubiquitous mobile platforms in dim light despite extensive low light video enhancement solutions they are unfit for deployment to mobile devices due to their complex models and and ignorance of system dynamics like energy budgets in this paper we propose adaenlight an energy aware low 
light video stream enhancement system on mobile devices it achieves real time video enhancement with competitive visual quality while allowing runtime behavior adaptation to the platform imposed dynamic energy budgets we report extensive experiments on diverse datasets scenarios and platforms and demonstrate the superiority of adaenlight compared with state of the art low light image and video enhancement solutions few shot query focused summarization with prefix merging authors ruifeng yuan zili wang ziqiang cao wenjie li subjects computation and language cs cl artificial intelligence cs ai arxiv link pdf link abstract query focused summarization has been considered as an important extension for text summarization it aims to generate a concise highlight for a given query different from text summarization query focused summarization has long been plagued by the problem of lacking high quality large scale datasets in this paper we investigate the idea that whether we can integrate and transfer the knowledge of text summarization and question answering to assist the few shot learning in query focused summarization here we propose prefix merging a prefix based pretraining strategy for few shot learning in query focused summarization drawn inspiration from prefix tuning we are allowed to integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query focused summarization with only a small amount of trainable parameters prefix merging outperforms fine tuning on query focused summarization we further discuss the influence of different prefix designs and propose a visualized explanation for how prefix merging works datid diversity preserved domain adaptation using text to image diffusion for generative model authors gwanghyun kim se young chun subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract recent generative models have 
achieved remarkable performance in synthesizing high resolution photorealistic images with view consistency and detailed shapes but training them for diverse domains is challenging since it requires massive training images and their camera distribution information text guided domain adaptation methods have shown impressive performance on converting the generative model on one domain into the models on other domains with different styles by leveraging the clip contrastive language image pre training rather than collecting massive datasets for those domains however one drawback of them is that the sample diversity in the original generative model is not well preserved in the domain adapted generative models due to the deterministic nature of the clip text encoder text guided domain adaptation will be even more challenging for generative models not only because of catastrophic diversity loss but also because of inferior text image correspondence and poor image quality here we propose datid a domain adaptation method tailored for generative models using text to image diffusion models that can synthesize diverse images per text prompt without collecting additional images and camera information for the target domain unlike extensions of prior text guided domain adaptation methods our novel pipeline was able to fine tune the state of the art generator of the source domain to synthesize high resolution multi view consistent images in text guided targeted domains without additional data outperforming the existing text guided domain adaptation methods in diversity and text image correspondence furthermore we propose and demonstrate diverse image manipulations such as one shot instance selected adaptation and single view manipulated reconstruction to fully enjoy diversity in text symmetry detection in trajectory data for more meaningful reinforcement learning representations authors marissa d alonzo rebecca russell subjects machine learning cs lg artificial intelligence cs ai 
robotics cs ro arxiv link pdf link abstract knowledge of the symmetries of reinforcement learning rl systems can be used to create compressed and semantically meaningful representations of a low level state space we present a method of automatically detecting rl symmetries directly from raw trajectory data without requiring active control of the system our method generates candidate symmetries and trains a recurrent neural network rnn to discriminate between the original trajectories and the transformed trajectories for each candidate symmetry the rnn discriminator s accuracy for each candidate reveals how symmetric the system is under that transformation this information can be used to create high level representations that are invariant to all symmetries on a dataset level and to communicate properties of the rl behavior to users we show in experiments on two simulated rl use cases a pusher robot and a uav flying in wind that our method can determine the symmetries underlying both the environment physics and the trained rl policy abstract visual reasoning with tangram shapes authors anya ji noriyuki kojima noah rush alane suhr wai keen vong robert d hawkins yoav artzi subjects computation and language cs cl artificial intelligence cs ai computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract we introduce kilogram a resource for studying abstract visual reasoning in humans and machines drawing on the history of tangram puzzles as stimuli in cognitive science we build a richly annotated dataset that with distinct stimuli is orders of magnitude larger and more diverse than prior resources it is both visually and linguistically richer moving beyond whole shape descriptions to include segmentation maps and part labels we use this resource to evaluate the abstract visual reasoning capacities of recent multi modal models we observe that pre trained weights demonstrate limited abstract reasoning which dramatically improves with 
fine tuning we also observe that explicitly describing parts aids abstract reasoning for both humans and models especially when jointly encoding the linguistic and visual inputs kilogram is available at keyword raw image learning visual planning models from partially observed images authors kebing jin zhanhao xiao hankui hankz zhuo hai wan jiaran cai subjects machine learning cs lg artificial intelligence cs ai computer vision and pattern recognition cs cv arxiv link pdf link abstract there has been increasing attention on planning model learning in classical planning most existing approaches however focus on learning planning models from structured data in symbolic representations it is often difficult to obtain such structured data in real world scenarios although a number of approaches have been developed for learning planning models from fully observed unstructured data e g images in many scenarios raw observations are often incomplete in this paper we provide a novel framework atype recplan for learning a transition model from partially observed raw image traces more specifically by considering the preceding and subsequent images in a trace we learn the latent state representations of raw observations and then build a transition model based on such representations additionally we propose a neural network based approach to learn a heuristic model that estimates the distance toward a given goal observation based on the learned transition model and heuristic model we implement a classical planner for images we exhibit empirically that our approach is more effective than a state of the art approach of learning visual planning models in the environment with incomplete observations
1
242,854
26,277,856,074
IssuesEvent
2023-01-07 01:20:31
gavarasana/ps-flux
https://api.github.com/repos/gavarasana/ps-flux
opened
CVE-2022-0155 (Medium) detected in follow-redirects-1.13.1.tgz
security vulnerability
## CVE-2022-0155 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.13.1.tgz</b></p></summary> <p>HTTP and HTTPS modules that follow redirects.</p> <p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.13.1.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.13.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p> <p> Dependency Hierarchy: - react-scripts-4.0.1.tgz (Root Library) - webpack-dev-server-3.11.0.tgz - http-proxy-middleware-0.19.1.tgz - http-proxy-1.18.1.tgz - :x: **follow-redirects-1.13.1.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor <p>Publish Date: 2022-01-10 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0155>CVE-2022-0155</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p> <p>Release Date: 2022-01-10</p> <p>Fix Resolution (follow-redirects): 1.14.7</p> <p>Direct dependency fix Resolution (react-scripts): 4.0.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-0155 (Medium) detected in follow-redirects-1.13.1.tgz - ## CVE-2022-0155 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.13.1.tgz</b></p></summary> <p>HTTP and HTTPS modules that follow redirects.</p> <p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.13.1.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.13.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p> <p> Dependency Hierarchy: - react-scripts-4.0.1.tgz (Root Library) - webpack-dev-server-3.11.0.tgz - http-proxy-middleware-0.19.1.tgz - http-proxy-1.18.1.tgz - :x: **follow-redirects-1.13.1.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor <p>Publish Date: 2022-01-10 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-0155>CVE-2022-0155</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p> <p>Release Date: 2022-01-10</p> <p>Fix Resolution (follow-redirects): 1.14.7</p> <p>Direct dependency fix Resolution (react-scripts): 4.0.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in follow redirects tgz cve medium severity vulnerability vulnerable library follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file package json path to vulnerable library node modules follow redirects package json dependency hierarchy react scripts tgz root library webpack dev server tgz http proxy middleware tgz http proxy tgz x follow redirects tgz vulnerable library found in base branch main vulnerability details follow redirects is vulnerable to exposure of private personal information to an unauthorized actor publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution follow redirects direct dependency fix resolution react scripts step up your open source security game with mend
0
10,377
13,193,250,720
IssuesEvent
2020-08-13 14:57:29
prisma/language-tools
https://api.github.com/repos/prisma/language-tools
closed
Detect users VSCode theme and suggest to use a different one if necessary
kind/improvement process/candidate
The minimal themes do not use all the root groups from the textmate grammer used for syntax highlighting, leading to a bad experience for users. We could detect the theme and create a toast telling people about this.
1.0
Detect users VSCode theme and suggest to use a different one if necessary - The minimal themes do not use all the root groups from the textmate grammer used for syntax highlighting, leading to a bad experience for users. We could detect the theme and create a toast telling people about this.
process
detect users vscode theme and suggest to use a different one if necessary the minimal themes do not use all the root groups from the textmate grammer used for syntax highlighting leading to a bad experience for users we could detect the theme and create a toast telling people about this
1
750,326
26,198,039,329
IssuesEvent
2023-01-03 15:07:04
GiPHouse/Website
https://api.github.com/repos/GiPHouse/Website
closed
Use better email slugs
priority:low
### One-sentence description Use better email slugs ### How to reproduce the bug Current mail aliases are very long and confusing ### Expected behaviour Think about how to handle archiving of email groups, users re-entering them, etc.
1.0
Use better email slugs - ### One-sentence description Use better email slugs ### How to reproduce the bug Current mail aliases are very long and confusing ### Expected behaviour Think about how to handle archiving of email groups, users re-entering them, etc.
non_process
use better email slugs one sentence description use better email slugs how to reproduce the bug current mail aliases are very long and confusing expected behaviour think about how to handle archiving of email groups users re entering them etc
0
11,148
13,957,693,042
IssuesEvent
2020-10-24 08:10:52
alexanderkotsev/geoportal
https://api.github.com/repos/alexanderkotsev/geoportal
opened
SE: Harvesting Request
Geoportal Harvesting process SE - Sweden
Hi! Are the servers still down or is it posible to harvest from the Swedish node? Regards Bj&ouml;rn Olofsson, support, the Swedish Geo postal
1.0
SE: Harvesting Request - Hi! Are the servers still down or is it posible to harvest from the Swedish node? Regards Bj&ouml;rn Olofsson, support, the Swedish Geo postal
process
se harvesting request hi are the servers still down or is it posible to harvest from the swedish node regards bj ouml rn olofsson support the swedish geo postal
1
82,267
15,884,024,981
IssuesEvent
2021-04-09 18:15:02
google/iree
https://api.github.com/repos/google/iree
closed
Investigate LinalgVectorizationPass causing ConvertToLLVMTo fail
codegen/llvm
With https://github.com/google/iree/pull/5362 LinalgVectorizationPass is cauing bert_encoder_unroled_fake_weigths to fail ConvertToLLVM with ` %1268 = "llvm.fcmp"(%1267, %6) {fastmathFlags = #llvm.fastmath<>, predicate = 2 : i64} : (!llvm.array<64 x vector<64xf32>>, !llvm.array<64 x vector<64xf32>>) -> !llvm.array<64 x vector<64xi1>> ` Add more snippets
1.0
Investigate LinalgVectorizationPass causing ConvertToLLVMTo fail - With https://github.com/google/iree/pull/5362 LinalgVectorizationPass is cauing bert_encoder_unroled_fake_weigths to fail ConvertToLLVM with ` %1268 = "llvm.fcmp"(%1267, %6) {fastmathFlags = #llvm.fastmath<>, predicate = 2 : i64} : (!llvm.array<64 x vector<64xf32>>, !llvm.array<64 x vector<64xf32>>) -> !llvm.array<64 x vector<64xi1>> ` Add more snippets
non_process
investigate linalgvectorizationpass causing converttollvmto fail with linalgvectorizationpass is cauing bert encoder unroled fake weigths to fail converttollvm with llvm fcmp fastmathflags llvm fastmath predicate llvm array llvm array llvm array add more snippets
0
20,129
26,664,917,426
IssuesEvent
2023-01-26 02:00:07
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Thu, 26 Jan 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP There is no result ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### Few-Shot Learning Enables Population-Scale Analysis of Leaf Traits in Populus trichocarpa - **Authors:** John Lagergren, Mirko Pavicic, Hari B. Chhetri, Larry M. York, P. Doug Hyatt, David Kainer, Erica M. Rutter, Kevin Flores, Gail Taylor, Daniel Jacobson, Jared Streich - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Quantitative Methods (q-bio.QM) - **Arxiv link:** https://arxiv.org/abs/2301.10351 - **Pdf link:** https://arxiv.org/pdf/2301.10351 - **Abstract** Plant phenotyping is typically a time-consuming and expensive endeavor, requiring large groups of researchers to meticulously measure biologically relevant plant traits, and is the main bottleneck in understanding plant adaptation and the genetic architecture underlying complex traits at population scale. In this work, we address these challenges by leveraging few-shot learning with convolutional neural networks (CNNs) to segment the leaf body and visible venation of 2,906 P. trichocarpa leaf images obtained in the field. In contrast to previous methods, our approach (i) does not require experimental or image pre-processing, (ii) uses the raw RGB images at full resolution, and (iii) requires very few samples for training (e.g., just eight images for vein segmentation). 
Traits relating to leaf morphology and vein topology are extracted from the resulting segmentations using traditional open-source image-processing tools, validated using real-world physical measurements, and used to conduct a genome-wide association study to identify genes controlling the traits. In this way, the current work is designed to provide the plant phenotyping community with (i) methods for fast and accurate image-based feature extraction that require minimal training data, and (ii) a new population-scale data set, including 68 different leaf phenotypes, for domain scientists and machine learning researchers. All of the few-shot learning code, data, and results are made publicly available. ### Efficient Flow-Guided Multi-frame De-fencing - **Authors:** Stavros Tsogkas, Fengjia Zhang, Allan Jepson, Alex Levinshtein - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.10759 - **Pdf link:** https://arxiv.org/pdf/2301.10759 - **Abstract** Taking photographs ''in-the-wild'' is often hindered by fence obstructions that stand between the camera user and the scene of interest, and which are hard or impossible to avoid. De-fencing is the algorithmic process of automatically removing such obstructions from images, revealing the invisible parts of the scene. While this problem can be formulated as a combination of fence segmentation and image inpainting, this often leads to implausible hallucinations of the occluded regions. Existing multi-frame approaches rely on propagating information to a selected keyframe from its temporal neighbors, but they are often inefficient and struggle with alignment of severely obstructed images. In this work we draw inspiration from the video completion literature and develop a simplified framework for multi-frame de-fencing that computes high quality flow maps directly from obstructed frames and uses them to accurately align frames. 
Our primary focus is efficiency and practicality in a real-world setting: the input to our algorithm is a short image burst (5 frames) - a data modality commonly available in modern smartphones - and the output is a single reconstructed keyframe, with the fence removed. Our approach leverages simple yet effective CNN modules, trained on carefully generated synthetic data, and outperforms more complicated alternatives real bursts, both quantitatively and qualitatively, while running real-time. ## Keyword: raw image There is no result
2.0
New submissions for Thu, 26 Jan 23 - ## Keyword: events There is no result ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP There is no result ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### Few-Shot Learning Enables Population-Scale Analysis of Leaf Traits in Populus trichocarpa - **Authors:** John Lagergren, Mirko Pavicic, Hari B. Chhetri, Larry M. York, P. Doug Hyatt, David Kainer, Erica M. Rutter, Kevin Flores, Gail Taylor, Daniel Jacobson, Jared Streich - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Quantitative Methods (q-bio.QM) - **Arxiv link:** https://arxiv.org/abs/2301.10351 - **Pdf link:** https://arxiv.org/pdf/2301.10351 - **Abstract** Plant phenotyping is typically a time-consuming and expensive endeavor, requiring large groups of researchers to meticulously measure biologically relevant plant traits, and is the main bottleneck in understanding plant adaptation and the genetic architecture underlying complex traits at population scale. In this work, we address these challenges by leveraging few-shot learning with convolutional neural networks (CNNs) to segment the leaf body and visible venation of 2,906 P. trichocarpa leaf images obtained in the field. In contrast to previous methods, our approach (i) does not require experimental or image pre-processing, (ii) uses the raw RGB images at full resolution, and (iii) requires very few samples for training (e.g., just eight images for vein segmentation). 
Traits relating to leaf morphology and vein topology are extracted from the resulting segmentations using traditional open-source image-processing tools, validated using real-world physical measurements, and used to conduct a genome-wide association study to identify genes controlling the traits. In this way, the current work is designed to provide the plant phenotyping community with (i) methods for fast and accurate image-based feature extraction that require minimal training data, and (ii) a new population-scale data set, including 68 different leaf phenotypes, for domain scientists and machine learning researchers. All of the few-shot learning code, data, and results are made publicly available. ### Efficient Flow-Guided Multi-frame De-fencing - **Authors:** Stavros Tsogkas, Fengjia Zhang, Allan Jepson, Alex Levinshtein - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.10759 - **Pdf link:** https://arxiv.org/pdf/2301.10759 - **Abstract** Taking photographs ''in-the-wild'' is often hindered by fence obstructions that stand between the camera user and the scene of interest, and which are hard or impossible to avoid. De-fencing is the algorithmic process of automatically removing such obstructions from images, revealing the invisible parts of the scene. While this problem can be formulated as a combination of fence segmentation and image inpainting, this often leads to implausible hallucinations of the occluded regions. Existing multi-frame approaches rely on propagating information to a selected keyframe from its temporal neighbors, but they are often inefficient and struggle with alignment of severely obstructed images. In this work we draw inspiration from the video completion literature and develop a simplified framework for multi-frame de-fencing that computes high quality flow maps directly from obstructed frames and uses them to accurately align frames. 
Our primary focus is efficiency and practicality in a real-world setting: the input to our algorithm is a short image burst (5 frames) - a data modality commonly available in modern smartphones - and the output is a single reconstructed keyframe, with the fence removed. Our approach leverages simple yet effective CNN modules, trained on carefully generated synthetic data, and outperforms more complicated alternatives on real bursts, both quantitatively and qualitatively, while running real-time. ## Keyword: raw image There is no result
process
new submissions for thu jan keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp there is no result keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw few shot learning enables population scale analysis of leaf traits in populus trichocarpa authors john lagergren mirko pavicic hari b chhetri larry m york p doug hyatt david kainer erica m rutter kevin flores gail taylor daniel jacobson jared streich subjects computer vision and pattern recognition cs cv quantitative methods q bio qm arxiv link pdf link abstract plant phenotyping is typically a time consuming and expensive endeavor requiring large groups of researchers to meticulously measure biologically relevant plant traits and is the main bottleneck in understanding plant adaptation and the genetic architecture underlying complex traits at population scale in this work we address these challenges by leveraging few shot learning with convolutional neural networks cnns to segment the leaf body and visible venation of p trichocarpa leaf images obtained in the field in contrast to previous methods our approach i does not require experimental or image pre processing ii uses the raw rgb images at full resolution and iii requires very few samples for training e g just eight images for vein segmentation traits relating to leaf morphology and vein topology are extracted from the resulting segmentations using traditional open source image processing tools validated using real world physical measurements and used to conduct a genome wide association study to identify genes controlling the traits in this way the current work is designed to provide the plant phenotyping community with i methods for fast and accurate image based feature 
extraction that require minimal training data and ii a new population scale data set including different leaf phenotypes for domain scientists and machine learning researchers all of the few shot learning code data and results are made publicly available efficient flow guided multi frame de fencing authors stavros tsogkas fengjia zhang allan jepson alex levinshtein subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract taking photographs in the wild is often hindered by fence obstructions that stand between the camera user and the scene of interest and which are hard or impossible to avoid de fencing is the algorithmic process of automatically removing such obstructions from images revealing the invisible parts of the scene while this problem can be formulated as a combination of fence segmentation and image inpainting this often leads to implausible hallucinations of the occluded regions existing multi frame approaches rely on propagating information to a selected keyframe from its temporal neighbors but they are often inefficient and struggle with alignment of severely obstructed images in this work we draw inspiration from the video completion literature and develop a simplified framework for multi frame de fencing that computes high quality flow maps directly from obstructed frames and uses them to accurately align frames our primary focus is efficiency and practicality in a real world setting the input to our algorithm is a short image burst frames a data modality commonly available in modern smartphones and the output is a single reconstructed keyframe with the fence removed our approach leverages simple yet effective cnn modules trained on carefully generated synthetic data and outperforms more complicated alternatives real bursts both quantitatively and qualitatively while running real time keyword raw image there is no result
1
65,884
16,499,860,953
IssuesEvent
2021-05-25 13:42:24
gradle/gradle
https://api.github.com/repos/gradle/gradle
closed
Compare different setups of large/huge hierarchical Gradle projects
@idiomatic in:composite-builds in:multi-projects
Different setups to compare: 1. Build logic in _buildSrc_ or in an _included build_ 2. Main build in a single project with a hierarchy of subprojects 3. Main build as a hierarchy of included builds (requires build logic in _included build_) Feature wise: What are differences? What is better, what is worth? What are advantages of one over the other? Performance: How does performance of configuration time and IDE sync time compare? Build to use for comparison: - `gradle/gradle` build (using spike branch that turns it into a hierarchy) - Spring Boot build - Huge artificial build with thousands of projects
1.0
Compare different setups of large/huge hierarchical Gradle projects - Different setups to compare: 1. Build logic in _buildSrc_ or in an _included build_ 2. Main build in a single project with a hierarchy of subprojects 3. Main build as a hierarchy of included builds (requires build logic in _included build_) Feature wise: What are differences? What is better, what is worth? What are advantages of one over the other? Performance: How does performance of configuration time and IDE sync time compare? Build to use for comparison: - `gradle/gradle` build (using spike branch that turns it into a hierarchy) - Spring Boot build - Huge artificial build with thousands of projects
non_process
compare different setups of large huge hierarchical gradle projects different setups to compare build logic in buildsrc or in an included build main build in a single project with a hierarchy of subprojects main build as a hierarchy of included builds requires build logic in included build feature wise what are differences what is better what is worth what are advantages of one over the other performance how does performance of configuration time and ide sync time compare build to use for comparison gradle gradle build using spike branch that turns it into a hierarchy spring boot build huge artificial build with thousands of projects
0
4,159
7,104,525,950
IssuesEvent
2018-01-16 10:16:38
zotero/zotero
https://api.github.com/repos/zotero/zotero
closed
Styling in citation text editor not working
Word Processor Integration
I believe I may have seen a report on the forums of this before, and also https://github.com/zotero/zotero-libreoffice-integration/issues/35#issuecomment-357780732
1.0
Styling in citation text editor not working - I believe I may have seen a report on the forums of this before, and also https://github.com/zotero/zotero-libreoffice-integration/issues/35#issuecomment-357780732
process
styling in citation text editor not working i believe i may have seen a report on the forums of this before and also
1
74,881
25,381,584,720
IssuesEvent
2022-11-21 18:00:15
department-of-veterans-affairs/va.gov-cms
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
closed
Blank table header row in Drupal displays as blank on FE
Defect VA.gov frontend Needs refining ⭐️ Public Websites 508/Accessibility
## Describe the defect The Drupal table_field form UI states that the top row of a table can be left blank if the table doesn't have headers. However, when Editors leave the top row blank, the FE template still renders a header row, it's just blank - The header cells still have accessibility attributes like `scope="col"`, which would confuse the screen reader experience - It doesn't make sense to sighted users either - Mobile display doesn't have headers over each piece of data as intended Example: https://www.va.gov/houston-health-care/health-services/patient-advocates/ ![image.png](https://images.zenhubusercontent.com/61a671f5fc46c2a311655f75/3812fa14-c3c7-4efd-bc8f-f9c738715452) ## To Reproduce Steps to reproduce the behavior: 1. Create a blank table (or just look at CMS edit view for example page above) 2. See blank header row rendered on VAgov FE 3. Inspect elements to see accessibility attributes on blank header cells ## AC / Expected behavior - [ ] Update CMS table functionality to require content in the top row - [ ] Update help text so that editors are not encouraged to leave the top row blank, rather they should be told that the top row will be the table header row ### CMS Team Please check the team(s) that will do this work. - [ ] `Program` - [ ] `Platform CMS Team` - [ ] `Sitewide Crew` - [ ] `⭐️ Sitewide CMS` - [x] `⭐️ Public Websites` - [ ] `⭐️ Facilities` - [ ] `⭐️ User support`
1.0
Blank table header row in Drupal displays as blank on FE - ## Describe the defect The Drupal table_field form UI states that the top row of a table can be left blank if the table doesn't have headers. However, when Editors leave the top row blank, the FE template still renders a header row, it's just blank - The header cells still have accessibility attributes like `scope="col"`, which would confuse the screen reader experience - It doesn't make sense to sighted users either - Mobile display doesn't have headers over each piece of data as intended Example: https://www.va.gov/houston-health-care/health-services/patient-advocates/ ![image.png](https://images.zenhubusercontent.com/61a671f5fc46c2a311655f75/3812fa14-c3c7-4efd-bc8f-f9c738715452) ## To Reproduce Steps to reproduce the behavior: 1. Create a blank table (or just look at CMS edit view for example page above) 2. See blank header row rendered on VAgov FE 3. Inspect elements to see accessibility attributes on blank header cells ## AC / Expected behavior - [ ] Update CMS table functionality to require content in the top row - [ ] Update help text so that editors are not encouraged to leave the top row blank, rather they should be told that the top row will be the table header row ### CMS Team Please check the team(s) that will do this work. - [ ] `Program` - [ ] `Platform CMS Team` - [ ] `Sitewide Crew` - [ ] `⭐️ Sitewide CMS` - [x] `⭐️ Public Websites` - [ ] `⭐️ Facilities` - [ ] `⭐️ User support`
non_process
blank table header row in drupal displays as blank on fe describe the defect the drupal table field form ui states that the top row of a table can be left blank if the table doesn t have headers however when editors leave the top row blank the fe template still renders a header row it s just blank the header cells still have accessibility attributes like scope col which would confuse the screen reader experience it doesn t make sense to sighted users either mobile display doesn t have headers over each piece of data as intended example to reproduce steps to reproduce the behavior create a blank table or just look at cms edit view for example page above see blank header row rendered on vagov fe inspect elements to see accessibility attributes on blank header cells ac expected behavior update cms table functionality to require content in the top row update help text so that editors are not encouraged to leave the top row blank rather they should be told that the top row will be the table header row cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support
0
46,279
13,154,345,770
IssuesEvent
2020-08-10 06:30:52
raindigi/site-landing
https://api.github.com/repos/raindigi/site-landing
opened
CVE-2020-7661 (High) detected in url-regex-3.2.0.tgz
security vulnerability
## CVE-2020-7661 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-regex-3.2.0.tgz</b></p></summary> <p>Regular expression for matching URLs</p> <p>Library home page: <a href="https://registry.npmjs.org/url-regex/-/url-regex-3.2.0.tgz">https://registry.npmjs.org/url-regex/-/url-regex-3.2.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/site-landing/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/site-landing/node_modules/url-regex/package.json</p> <p> Dependency Hierarchy: - gatsby-plugin-sharp-2.0.32.tgz (Root Library) - potrace-2.1.1.tgz - jimp-0.2.28.tgz - :x: **url-regex-3.2.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/raindigi/site-landing/commit/16b26718d0664b1bcc170228749438299b7e65c2">16b26718d0664b1bcc170228749438299b7e65c2</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> all versions of url-regex are vulnerable to Regular Expression Denial of Service. An attacker providing a very long string in String.test can cause a Denial of Service. <p>Publish Date: 2020-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7661>CVE-2020-7661</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7661 (High) detected in url-regex-3.2.0.tgz - ## CVE-2020-7661 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-regex-3.2.0.tgz</b></p></summary> <p>Regular expression for matching URLs</p> <p>Library home page: <a href="https://registry.npmjs.org/url-regex/-/url-regex-3.2.0.tgz">https://registry.npmjs.org/url-regex/-/url-regex-3.2.0.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/site-landing/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/site-landing/node_modules/url-regex/package.json</p> <p> Dependency Hierarchy: - gatsby-plugin-sharp-2.0.32.tgz (Root Library) - potrace-2.1.1.tgz - jimp-0.2.28.tgz - :x: **url-regex-3.2.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/raindigi/site-landing/commit/16b26718d0664b1bcc170228749438299b7e65c2">16b26718d0664b1bcc170228749438299b7e65c2</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> all versions of url-regex are vulnerable to Regular Expression Denial of Service. An attacker providing a very long string in String.test can cause a Denial of Service. 
<p>Publish Date: 2020-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7661>CVE-2020-7661</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in url regex tgz cve high severity vulnerability vulnerable library url regex tgz regular expression for matching urls library home page a href path to dependency file tmp ws scm site landing package json path to vulnerable library tmp ws scm site landing node modules url regex package json dependency hierarchy gatsby plugin sharp tgz root library potrace tgz jimp tgz x url regex tgz vulnerable library found in head commit a href vulnerability details all versions of url regex are vulnerable to regular expression denial of service an attacker providing a very long string in string test can cause a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource
0
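The CVE record above describes a Regular Expression Denial of Service in url-regex (a JavaScript library): a long input passed to `String.test` triggers catastrophic backtracking. As a minimal illustration of that failure mode — not the url-regex library's actual pattern, and in Python rather than JavaScript — a nested-quantifier regex shows the same behavior:

```python
import re
import time

# A textbook ReDoS pattern: nested quantifiers over the same character
# class. On a crafted NON-matching input, the engine retries every way
# of partitioning the 'a's before failing, which is exponential work.
VULNERABLE = re.compile(r"(a+)+$")

def failing_match_time(n: int) -> float:
    """Time a match attempt on n 'a's followed by 'b' (which never matches)."""
    payload = "a" * n + "b"
    start = time.monotonic()
    assert VULNERABLE.match(payload) is None  # always fails, slowly
    return time.monotonic() - start
```

Each additional character roughly doubles the work, so even a few dozen characters take seconds and a "very long string", as the advisory puts it, effectively hangs the process.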
183,375
14,226,691,180
IssuesEvent
2020-11-17 23:33:16
cocotb/cocotb
https://api.github.com/repos/cocotb/cocotb
closed
GHA: update conda-incubator/setup-miniconda
category:OS:Windows category:tests-ci
`::add-path` and `::set-env` have been disabled: https://github.com/cocotb/cocotb/pull/2200/checks?check_run_id=1409789127#step:4:25 Update to [conda-incubator/setup-miniconda@v2](https://github.com/marketplace/actions/setup-miniconda) and get it working: ```yaml - name: Set up Anaconda uses: conda-incubator/setup-miniconda@v2.0.0 ```
1.0
GHA: update conda-incubator/setup-miniconda - `::add-path` and `::set-env` have been disabled: https://github.com/cocotb/cocotb/pull/2200/checks?check_run_id=1409789127#step:4:25 Update to [conda-incubator/setup-miniconda@v2](https://github.com/marketplace/actions/setup-miniconda) and get it working: ```yaml - name: Set up Anaconda uses: conda-incubator/setup-miniconda@v2.0.0 ```
non_process
gha update conda incubator setup miniconda add path and set env have been disabled update to and get it working yaml name set up anaconda uses conda incubator setup miniconda
0
8,499
11,661,648,685
IssuesEvent
2020-03-03 07:20:58
yalla-coop/accountability
https://api.github.com/repos/yalla-coop/accountability
closed
Code formatting and linting
Priority 1 fib-1 process website
As agreed at #37 we'll be using fairly standard **eslint** for linting and **prettier** for formatting * [x] Configure eslint (see https://www.gatsbyjs.org/docs/eslint/) * [x] Add and configure [prettier](https://www.gatsbyjs.org/docs/eslint/) * [x] Leave issue open and we'll review setup and see if we're happy with the set of rules next week
1.0
Code formatting and linting - As agreed at #37 we'll be using fairly standard **eslint** for linting and **prettier** for formatting * [x] Configure eslint (see https://www.gatsbyjs.org/docs/eslint/) * [x] Add and configure [prettier](https://www.gatsbyjs.org/docs/eslint/) * [x] Leave issue open and we'll review setup and see if we're happy with the set of rules next week
process
code formatting and linting as agreed at we ll be using fairly standard eslint for linting and prettier for formatting configure eslint see add and configure leave issue open and we ll review setup and see if we re happy with the set of rules next week
1
6,447
9,546,273,854
IssuesEvent
2019-05-01 19:27:23
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
closed
Apply: Add Optional beside Experience header
Apply Process Approved Requirements Ready State Dept.
Who: Internship applicants What: Should not be required to enter work experience Why: because they might not have any On the Experiences & References page of the application, experience is not required and I can click on Save & continue with no experience. Please add Optional beside the experience header. (not in mock) ![image.png](https://images.zenhubusercontent.com/59ee08f1a468affe6df7cd6f/972d16f0-e6e6-4de3-97fa-d841f4c2bfe5)
1.0
Apply: Add Optional beside Experience header - Who: Internship applicants What: Should not be required to enter work experience Why: because they might not have any On the Experiences & References page of the application, experience is not required and I can click on Save & continue with no experience. Please add Optional beside the experience header. (not in mock) ![image.png](https://images.zenhubusercontent.com/59ee08f1a468affe6df7cd6f/972d16f0-e6e6-4de3-97fa-d841f4c2bfe5)
process
apply add optional beside experience header who internship applicants what should not be required to enter work experience why because they might not have any on the experiences references page of the application experience is not required and i can click on save continue with no experience please add optional beside the experience header not in mock
1
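Each record above carries both the raw issue title/body and a lowercased, punctuation-stripped `text` column. The dataset's actual preprocessing code is not included; the following is a sketch, reconstructed from the before/after pairs in the records (the function name and exact rules are assumptions), of a normalization that reproduces them:

```python
import re

def normalize_issue_text(raw: str) -> str:
    """Sketch of the cleaning apparently applied to produce the `text`
    column: lowercase, remove URLs entirely, replace every character
    that is not a letter or whitespace, then collapse whitespace runs."""
    text = raw.lower()
    text = re.sub(r"https?://\S+", " ", text)  # URLs dropped whole
    text = re.sub(r"[^a-z\s]", " ", text)      # digits/punctuation -> space
    return " ".join(text.split())              # collapse whitespace
```

Applied to the Zotero record's combined text, this yields exactly its cleaned column ("styling in citation text editor not working i believe i may have seen a report on the forums of this before and also"), which is what motivated the rules above.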
601,539
18,415,808,665
IssuesEvent
2021-10-13 11:18:13
googleapis/python-compute
https://api.github.com/repos/googleapis/python-compute
opened
samples.snippets.test_sample_start_stop: test_instance_operations failed
type: bug priority: p1 flakybot: issue
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 7a9e8324e08c46a93050908760b2b5aca054a863 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/92b2bcf9-84ba-4b4c-97cf-12dbf5977f8b), [Sponge](http://sponge2/92b2bcf9-84ba-4b4c-97cf-12dbf5977f8b) status: failed <details><summary>Test output</summary><br><pre>@pytest.fixture def compute_instance(): disk = _make_disk() request = _make_request(disk) > instance = _create_instance(request) test_sample_start_stop.py:103: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_sample_start_stop.py:79: in _create_instance operation = instance_client.insert(request=request) ../../google/cloud/compute_v1/services/instances/client.py:1971: in insert response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) .nox/py-3-9/lib/python3.9/site-packages/google/api_core/gapic_v1/method.py:142: in __call__ return wrapped_func(*args, **kwargs) .nox/py-3-9/lib/python3.9/site-packages/google/api_core/grpc_helpers.py:66: in error_remapped_callable return callable_(*args, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.compute_v1.services.instances.transports.rest.InstancesRestTransport object at 0x7f9d5b8c67c0> request = zone: "europe-central2-b" instance_resource { name: "ia2ab669739" network_interfaces { name: "default" } d...: true } machine_type: "zones/europe-central2-b/machineTypes/e2-micro" } project: "python-docs-samples-tests-py39" def insert( self, request: compute.InsertInstanceRequest, *, metadata: Sequence[Tuple[str, str]] = (), ) -> compute.Operation: r"""Call the insert method over HTTP. Args: request (~.compute.InsertInstanceRequest): The request object. 
A request message for Instances.Insert. See the method description for details. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: ~.compute.Operation: Represents an Operation resource. Google Compute Engine has three Operation resources: - `Global </compute/docs/reference/rest/{$api_version}/globalOperations>`__ \* `Regional </compute/docs/reference/rest/{$api_version}/regionOperations>`__ \* `Zonal </compute/docs/reference/rest/{$api_version}/zoneOperations>`__ You can use an operation resource to manage asynchronous API requests. For more information, read Handling API responses. Operations can be global, regional or zonal. - For global operations, use the ``globalOperations`` resource. - For regional operations, use the ``regionOperations`` resource. - For zonal operations, use the ``zonalOperations`` resource. For more information, read Global, Regional, and Zonal Resources. (== resource_for {$api_version}.globalOperations ==) (== resource_for {$api_version}.regionOperations ==) (== resource_for {$api_version}.zoneOperations ==) """ # Jsonify the request body body = compute.Instance.to_json( request.instance_resource, including_default_value_fields=False, use_integers_for_enums=False, ) # TODO(yon-mg): need to handle grpc transcoding and parse url correctly # current impl assumes basic case of grpc transcoding url = "https://{host}/compute/v1/projects/{project}/zones/{zone}/instances".format( host=self._host, project=request.project, zone=request.zone, ) # TODO(yon-mg): handle nested fields corerctly rather than using only top level fields # not required for GCE query_params = {} if compute.InsertInstanceRequest.request_id in request: query_params["requestId"] = request.request_id if compute.InsertInstanceRequest.source_instance_template in request: query_params["sourceInstanceTemplate"] = request.source_instance_template # Send the request headers = dict(metadata) headers["Content-Type"] = 
"application/json" response = self._session.post( url, headers=headers, params=query_params, data=body, ) # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception # subclass. if response.status_code >= 400: > raise core_exceptions.from_http_response(response) E google.api_core.exceptions.Forbidden: 403 POST https://compute.googleapis.com:443/compute/v1/projects/python-docs-samples-tests-py39/zones/europe-central2-b/instances: Compute Engine API has not been used in project 890447421168 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=890447421168 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. ../../google/cloud/compute_v1/services/instances/transports/rest.py:1244: Forbidden</pre></details>
1.0
samples.snippets.test_sample_start_stop: test_instance_operations failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: 7a9e8324e08c46a93050908760b2b5aca054a863 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/92b2bcf9-84ba-4b4c-97cf-12dbf5977f8b), [Sponge](http://sponge2/92b2bcf9-84ba-4b4c-97cf-12dbf5977f8b) status: failed <details><summary>Test output</summary><br><pre>@pytest.fixture def compute_instance(): disk = _make_disk() request = _make_request(disk) > instance = _create_instance(request) test_sample_start_stop.py:103: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_sample_start_stop.py:79: in _create_instance operation = instance_client.insert(request=request) ../../google/cloud/compute_v1/services/instances/client.py:1971: in insert response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) .nox/py-3-9/lib/python3.9/site-packages/google/api_core/gapic_v1/method.py:142: in __call__ return wrapped_func(*args, **kwargs) .nox/py-3-9/lib/python3.9/site-packages/google/api_core/grpc_helpers.py:66: in error_remapped_callable return callable_(*args, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.compute_v1.services.instances.transports.rest.InstancesRestTransport object at 0x7f9d5b8c67c0> request = zone: "europe-central2-b" instance_resource { name: "ia2ab669739" network_interfaces { name: "default" } d...: true } machine_type: "zones/europe-central2-b/machineTypes/e2-micro" } project: "python-docs-samples-tests-py39" def insert( self, request: compute.InsertInstanceRequest, *, metadata: Sequence[Tuple[str, str]] = (), ) -> compute.Operation: r"""Call the insert method over HTTP. 
Args: request (~.compute.InsertInstanceRequest): The request object. A request message for Instances.Insert. See the method description for details. metadata (Sequence[Tuple[str, str]]): Strings which should be sent along with the request as metadata. Returns: ~.compute.Operation: Represents an Operation resource. Google Compute Engine has three Operation resources: - `Global </compute/docs/reference/rest/{$api_version}/globalOperations>`__ \* `Regional </compute/docs/reference/rest/{$api_version}/regionOperations>`__ \* `Zonal </compute/docs/reference/rest/{$api_version}/zoneOperations>`__ You can use an operation resource to manage asynchronous API requests. For more information, read Handling API responses. Operations can be global, regional or zonal. - For global operations, use the ``globalOperations`` resource. - For regional operations, use the ``regionOperations`` resource. - For zonal operations, use the ``zonalOperations`` resource. For more information, read Global, Regional, and Zonal Resources. 
(== resource_for {$api_version}.globalOperations ==) (== resource_for {$api_version}.regionOperations ==) (== resource_for {$api_version}.zoneOperations ==) """ # Jsonify the request body body = compute.Instance.to_json( request.instance_resource, including_default_value_fields=False, use_integers_for_enums=False, ) # TODO(yon-mg): need to handle grpc transcoding and parse url correctly # current impl assumes basic case of grpc transcoding url = "https://{host}/compute/v1/projects/{project}/zones/{zone}/instances".format( host=self._host, project=request.project, zone=request.zone, ) # TODO(yon-mg): handle nested fields corerctly rather than using only top level fields # not required for GCE query_params = {} if compute.InsertInstanceRequest.request_id in request: query_params["requestId"] = request.request_id if compute.InsertInstanceRequest.source_instance_template in request: query_params["sourceInstanceTemplate"] = request.source_instance_template # Send the request headers = dict(metadata) headers["Content-Type"] = "application/json" response = self._session.post( url, headers=headers, params=query_params, data=body, ) # In case of error, raise the appropriate core_exceptions.GoogleAPICallError exception # subclass. if response.status_code >= 400: > raise core_exceptions.from_http_response(response) E google.api_core.exceptions.Forbidden: 403 POST https://compute.googleapis.com:443/compute/v1/projects/python-docs-samples-tests-py39/zones/europe-central2-b/instances: Compute Engine API has not been used in project 890447421168 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=890447421168 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. ../../google/cloud/compute_v1/services/instances/transports/rest.py:1244: Forbidden</pre></details>
non_process
samples snippets test sample start stop test instance operations failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output pytest fixture def compute instance disk make disk request make request disk instance create instance request test sample start stop py test sample start stop py in create instance operation instance client insert request request google cloud compute services instances client py in insert response rpc request retry retry timeout timeout metadata metadata nox py lib site packages google api core gapic method py in call return wrapped func args kwargs nox py lib site packages google api core grpc helpers py in error remapped callable return callable args kwargs self request zone europe b instance resource name network interfaces name default d true machine type zones europe b machinetypes micro project python docs samples tests def insert self request compute insertinstancerequest metadata sequence compute operation r call the insert method over http args request compute insertinstancerequest the request object a request message for instances insert see the method description for details metadata sequence strings which should be sent along with the request as metadata returns compute operation represents an operation resource google compute engine has three operation resources global regional zonal you can use an operation resource to manage asynchronous api requests for more information read handling api responses operations can be global regional or zonal for global operations use the globaloperations resource for regional operations use the regionoperations resource for zonal operations use the zonaloperations resource for more information read global regional and zonal resources resource for api version globaloperations resource for api version regionoperations resource for api version zoneoperations jsonify 
the request body body compute instance to json request instance resource including default value fields false use integers for enums false todo yon mg need to handle grpc transcoding and parse url correctly current impl assumes basic case of grpc transcoding url host self host project request project zone request zone todo yon mg handle nested fields corerctly rather than using only top level fields not required for gce query params if compute insertinstancerequest request id in request query params request request id if compute insertinstancerequest source instance template in request query params request source instance template send the request headers dict metadata headers application json response self session post url headers headers params query params data body in case of error raise the appropriate core exceptions googleapicallerror exception subclass if response status code raise core exceptions from http response response e google api core exceptions forbidden post compute engine api has not been used in project before or it is disabled enable it by visiting then retry if you enabled this api recently wait a few minutes for the action to propagate to our systems and retry google cloud compute services instances transports rest py forbidden
0
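The traceback in the record above ends at the transport's error branch: any HTTP status of 400 or above is converted into a typed exception (here, `Forbidden` for 403). A minimal sketch of that status-to-exception mapping pattern; the class and function names below are illustrative stand-ins, not the real `google.api_core` implementation:

```python
# Illustrative sketch of mapping an HTTP error status to a typed exception,
# mirroring the `if response.status_code >= 400: raise ...` branch above.
class GoogleAPICallError(Exception):
    """Base class for API call failures (hypothetical)."""

class Forbidden(GoogleAPICallError):
    """HTTP 403: the API is disabled or the caller lacks permission."""

class NotFound(GoogleAPICallError):
    """HTTP 404: the resource does not exist."""

_STATUS_TO_EXC = {403: Forbidden, 404: NotFound}

def from_http_status(status_code, message):
    # Fall back to the base class for statuses without a dedicated type.
    exc_cls = _STATUS_TO_EXC.get(status_code, GoogleAPICallError)
    return exc_cls(f"{status_code} {message}")

def check_response(status_code, message=""):
    # Anything >= 400 becomes an exception; success codes pass through.
    if status_code >= 400:
        raise from_http_status(status_code, message)
```

The mapping dict makes it easy to add more status codes without touching the raising logic.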
12,960
15,340,668,323
IssuesEvent
2021-02-27 08:15:18
topcoder-platform/community-app
https://api.github.com/repos/topcoder-platform/community-app
opened
Clear filter issue when used along with recommended challenges toggle
P3 ShapeupProcess challenge- recommender-tool
1. Go to Challenge listings page, open for registration bucket 2. Switch on recommended challenges toggle 3. Add any other filter, example switch off development track 4. Note down my challenges count 5. Now click on clear filter 6. Note down My challenges count 7. go to my challenges / all challenges bucket expected: my challenge count must be reset to what it was before applying the filters. My Challenges/ All Challenges must not be filtered actual: my challenge count is not reset to what it was before applying the filters. My Challenges/ All Challenges are still filtered https://user-images.githubusercontent.com/58783823/109381681-c9a69d80-7901-11eb-9e15-ab306b062d14.mov
1.0
Clear filter issue when used along with recommended challenges toggle - 1. Go to Challenge listings page, open for registration bucket 2. Switch on recommended challenges toggle 3. Add any other filter, example switch off development track 4. Note down my challenges count 5. Now click on clear filter 6. Note down My challenges count 7. go to my challenges / all challenges bucket expected: my challenge count must be reset to what it was before applying the filters. My Challenges/ All Challenges must not be filtered actual: my challenge count is not reset to what it was before applying the filters. My Challenges/ All Challenges are still filtered https://user-images.githubusercontent.com/58783823/109381681-c9a69d80-7901-11eb-9e15-ab306b062d14.mov
process
clear filter issue when used along with recommended challenges toggle go to challenge listings page open for registration bucket switch on recommended challenges toggle add any other filter example switch off development track note down my challenges count now click on clear filter note down my challenges count go to my challenges all challenges bucket expected my challenge count must be reset to what it was before applying the filters my challenges all challenges must not be filtered actual my challenge count is not reset to what it was before applying the filters my challenges all challenges are still filtered
1
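The expected behaviour in the report above — "clear filter" restores the counts to their pre-filter values — can be sketched as a filter stack that clearing empties entirely. This is hypothetical code to illustrate the expected semantics, not the community-app implementation:

```python
class ChallengeList:
    """Hypothetical sketch of the expected clear-filter behaviour."""

    def __init__(self, challenges):
        self._all = list(challenges)
        self._filters = []  # active filter predicates

    def add_filter(self, predicate):
        self._filters.append(predicate)

    def clear_filters(self):
        # "Clear filter" must drop every predicate so the count
        # returns to what it was before any filter was applied.
        self._filters.clear()

    def count(self):
        items = self._all
        for predicate in self._filters:
            items = [c for c in items if predicate(c)]
        return len(items)
```

The bug described above corresponds to `clear_filters` leaving some predicate (or cached filtered state) behind.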
19,845
26,245,528,788
IssuesEvent
2023-01-05 14:58:49
celo-org/celo-monorepo
https://api.github.com/repos/celo-org/celo-monorepo
closed
Shepherd CIP51 through CIP process
release-process
This involves shepherding [CIP51](https://github.com/celo-org/celo-proposals/blob/master/CIPs/cip-0051.md) through the CIP process described here: - [Celo Improvement Proposal (CIP) Process](https://github.com/celo-org/celo-proposals/blob/master/CIPs/cip-0000.md) Useful links: - Github: [CIP51: Federated Attestations Protocol](https://github.com/celo-org/celo-proposals/blob/master/CIPs/cip-0051.md) - Forum: [CIP51: Federated Attestations Protocol [Discussion]](https://forum.celo.org/t/cip51-federated-attestations-protocol-discussion/3942)
1.0
Shepherd CIP51 through CIP process - This involves shepherding [CIP51](https://github.com/celo-org/celo-proposals/blob/master/CIPs/cip-0051.md) through the CIP process described here: - [Celo Improvement Proposal (CIP) Process](https://github.com/celo-org/celo-proposals/blob/master/CIPs/cip-0000.md) Useful links: - Github: [CIP51: Federated Attestations Protocol](https://github.com/celo-org/celo-proposals/blob/master/CIPs/cip-0051.md) - Forum: [CIP51: Federated Attestations Protocol [Discussion]](https://forum.celo.org/t/cip51-federated-attestations-protocol-discussion/3942)
process
shepherd through cip process this involves shepherding through the cip process described here useful links github forum
1
12,336
14,882,735,846
IssuesEvent
2021-01-20 12:20:23
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[iOS] Studies list > Study status is not updated for 'Not Eligible' post withdrawing
Bug P2 Process: Fixed Process: Tested dev iOS
**Scenario 1:** Study status is not updated for 'Not Eligible' post withdrawing for open study Steps: 1. Login and enroll into any open study successfully 2. Withdraw from the study 3. Again click on the withdrawn study 4. Fail the eligibility test 5. Observe the study status **Scenario 2:** Study status is not updated for 'Not Eligible' post withdrawing for closed study having eligibility test Steps: 1. Login and enroll into any closed study having eligibility test successfully 2. Withdraw from the study 3. PM admin enables and send the invitation again in the same site 4. From mobile click on the withdrawn study 5. Fail the eligibility test 6. Observe the study status Actual: Study status is currently showing as 'WIthdrawn' Expected: Study status should be updated to 'Not Eligible' post withdrawing
2.0
[iOS] Studies list > Study status is not updated for 'Not Eligible' post withdrawing - **Scenario 1:** Study status is not updated for 'Not Eligible' post withdrawing for open study Steps: 1. Login and enroll into any open study successfully 2. Withdraw from the study 3. Again click on the withdrawn study 4. Fail the eligibility test 5. Observe the study status **Scenario 2:** Study status is not updated for 'Not Eligible' post withdrawing for closed study having eligibility test Steps: 1. Login and enroll into any closed study having eligibility test successfully 2. Withdraw from the study 3. PM admin enables and send the invitation again in the same site 4. From mobile click on the withdrawn study 5. Fail the eligibility test 6. Observe the study status Actual: Study status is currently showing as 'WIthdrawn' Expected: Study status should be updated to 'Not Eligible' post withdrawing
process
studies list study status is not updated for not eligible post withdrawing scenario study status is not updated for not eligible post withdrawing for open study steps login and enroll into any open study successfully withdraw from the study again click on the withdrawn study fail the eligibility test observe the study status scenario study status is not updated for not eligible post withdrawing for closed study having eligibility test steps login and enroll into any closed study having eligibility test successfully withdraw from the study pm admin enables and send the invitation again in the same site from mobile click on the withdrawn study fail the eligibility test observe the study status actual study status is currently showing as withdrawn expected study status should be updated to not eligible post withdrawing
1
47,676
10,140,931,257
IssuesEvent
2019-08-03 09:05:30
atomist-blogs/org-visualizer
https://api.github.com/repos/atomist-blogs/org-visualizer
closed
Code Inspection: Tslint on explorer
code-inspection
### max-line-length - [`views/sunburstPage.tsx:104`](https://github.com/atomist-blogs/org-visualizer/blob/dc73d0d5fe49510ad616076987e5607a961200de/views/sunburstPage.tsx#L104): _(warn)_ Exceeds maximum line length of 150 [atomist:code-inspection:explorer=@atomist/atomist-sdm]
1.0
Code Inspection: Tslint on explorer - ### max-line-length - [`views/sunburstPage.tsx:104`](https://github.com/atomist-blogs/org-visualizer/blob/dc73d0d5fe49510ad616076987e5607a961200de/views/sunburstPage.tsx#L104): _(warn)_ Exceeds maximum line length of 150 [atomist:code-inspection:explorer=@atomist/atomist-sdm]
non_process
code inspection tslint on explorer max line length warn exceeds maximum line length of
0
427,816
12,399,369,717
IssuesEvent
2020-05-21 05:00:25
open-learning-exchange/planet
https://api.github.com/repos/open-learning-exchange/planet
closed
HealthEvent: Allow decimals
priority
I think the built in Angular number validator maybe defaults to 0 decimal places. If it allows a value for precision, let's go with 2.
1.0
HealthEvent: Allow decimals - I think the built in Angular number validator maybe defaults to 0 decimal places. If it allows a value for precision, let's go with 2.
non_process
healthevent allow decimals i think the built in angular number validator maybe defaults to decimal places if it allows a value for precision let s go with
0
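The record above asks for a numeric validator that allows up to two decimal places. A framework-agnostic sketch of that rule (a hypothetical helper, not the Angular validator itself), accepting non-negative values with at most two decimals:

```python
import re

# Accepts e.g. "12", "12.3", "12.34"; rejects "12.345", "12." and non-numeric input.
_TWO_DECIMALS = re.compile(r"\d+(\.\d{1,2})?")

def is_valid_measurement(value: str) -> bool:
    # fullmatch requires the entire string to match the pattern.
    return _TWO_DECIMALS.fullmatch(value) is not None
```

Negative values are rejected here by design; loosening the pattern to `-?\d+(\.\d{1,2})?` would allow them.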
111
2,546,157,955
IssuesEvent
2015-01-29 21:53:32
tinkerpop/tinkerpop3
https://api.github.com/repos/tinkerpop/tinkerpop3
closed
Are DedupStep and FoldStep SideEffectSteps? (proposal)
enhancement process
I just introduced the notion of `Reducing` (marker interface) and steps that require a reduce function to operate. However, if these steps are SideEffectSteps, then they should each have their own XXXMapReduce. * **DedupStep**: This can be a sideEffect in both OLTP and OLAP. The sideEffect data structure is the HashSet. In fact, it's identical to `Aggregate(Set)`. * **FoldStep**: This can be a sideEffect where the seed is the sideEffect data structure. Each incoming object mutates the seed. However, it needs to lock in OLTP (`BarrierStep` ... but we need "LazyBarrierStep" so it's memory efficient). In OLAP, it would be like an identity function, but the foldFunction mutates the seed and then it's cap'd. However, this is not general enough as you would have multiple parallel seeds and thus, OLAP could be different from OLTP. If we state in the docs that parallel execution must be semantically correct in the foldFunction, we have the other problem that the multiple fold'd data structures would need a combiner to generate a single fold data structure... The next question becomes: are these worthy of being sideEffects? I mean, people don't care about the sideEffect data... it doesn't really matter if it's there, but it's not necessary.
1.0
Are DedupStep and FoldStep SideEffectSteps? (proposal) - I just introduced the notion of `Reducing` (marker interface) and steps that require a reduce function to operate. However, if these steps are SideEffectSteps, then they should each have their own XXXMapReduce. * **DedupStep**: This can be a sideEffect in both OLTP and OLAP. The sideEffect data structure is the HashSet. In fact, it's identical to `Aggregate(Set)`. * **FoldStep**: This can be a sideEffect where the seed is the sideEffect data structure. Each incoming object mutates the seed. However, it needs to lock in OLTP (`BarrierStep` ... but we need "LazyBarrierStep" so it's memory efficient). In OLAP, it would be like an identity function, but the foldFunction mutates the seed and then it's cap'd. However, this is not general enough as you would have multiple parallel seeds and thus, OLAP could be different from OLTP. If we state in the docs that parallel execution must be semantically correct in the foldFunction, we have the other problem that the multiple fold'd data structures would need a combiner to generate a single fold data structure... The next question becomes: are these worthy of being sideEffects? I mean, people don't care about the sideEffect data... it doesn't really matter if it's there, but it's not necessary.
process
are dedupstep and foldstep sideeffectsteps proposal i just introduced the notion of reducing marker interface and steps that require a reduce function to operate however if these steps are sideeffectsteps then they should each have their own xxxmapreduce dedupstep this can be a sideeffect in both oltp and olap the sideeffect data structure is the hashset in fact its identical to aggregate set foldstep this can be a sideeffect where the seed is the sideeffect data structure each incoming object mutates the seed however it needs to lock in oltp barrierstep but we need lazybarrierstep so its memory efficient in olap it would be like an identity function but the foldfunction mutates the seed and then its cap d however this is not general enough as you would have multiple parallel seeds and thus olap could be different from oltp if we state in the docs that parallel execution must be semantically correct in the foldfunciton we have the other problem that the muliple fold d data stuctures would need a combiner to generate a single fold data structure the next question becomes are these worth of being sideeffects i mean people don t care about the sideeffect data it doesn t really matter if its there but its not necessary
1
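The proposal above treats DedupStep as a side effect over a HashSet and FoldStep as repeated combination of incoming objects into a seed. Both ideas can be sketched in a few lines of illustrative Python (not TinkerPop code):

```python
from functools import reduce

def dedup(stream):
    # DedupStep: the side-effect data structure is a set,
    # identical in spirit to Aggregate(Set).
    seen = set()
    for item in stream:
        if item not in seen:
            seen.add(item)
            yield item

def fold(stream, seed, fold_function):
    # FoldStep: the seed is the side-effect structure and each
    # incoming object is combined into it. Under parallel (OLAP)
    # execution, the partial seeds would additionally need a
    # combiner, which is exactly the concern raised above.
    return reduce(fold_function, stream, seed)
```

The combiner problem shows up as soon as `fold` runs on multiple partitions: each partition produces its own seed, and merging them is only safe if the fold function is associative.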
271,215
20,625,457,139
IssuesEvent
2022-03-07 21:59:36
barrucadu/resolved
https://api.github.com/repos/barrucadu/resolved
opened
Document how configuration works
documentation
This is kind of a placeholder issue until zone files are implemented and I'm distributing the root hints and RFC 6761 stuff.
1.0
Document how configuration works - This is kind of a placeholder issue until zone files are implemented and I'm distributing the root hints and RFC 6761 stuff.
non_process
document how configuration works this is kind of a placeholder issue until zone files are implemented and i m distributing the root hints and rfc stuff
0
21,012
27,947,701,312
IssuesEvent
2023-03-24 05:35:19
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
Support `*.HEIC` HEIF format images in Processing `ImportPhotosAlgorithm` (Request in QGIS)
Processing Alg 3.32
### Request for documentation From pull request QGIS/qgis#51973 Author: @shuckc QGIS version: 3.32 **Support `*.HEIC` HEIF format images in Processing `ImportPhotosAlgorithm`** ### PR Description: Minor changes to allow importing `*.heic` images with GDAL's HEIF image support. Firstly we add the file extension to our filter. I wanted to make this conditional on the GDAL driver being available, but our provider layer for GDAL does not seem to allow querying driver availability. Once the GDAL source is open, multiple metadata domains are available, we need to explicitly read the `EXIF` domain explicitly to find the usual GPS tags. Using a local build I confirmed with multiple images from an iOS device are imported, all the fields parse OK, attribute table looks good and markers plot correctly. The 'Map Tooltips' preview unfortunately cannot show thumbnails from HEIF format files, that would need some further investigation. First time contributor. Closes #51961 ### Commits tagged with [need-docs] or [FEATURE]
1.0
Support `*.HEIC` HEIF format images in Processing `ImportPhotosAlgorithm` (Request in QGIS) - ### Request for documentation From pull request QGIS/qgis#51973 Author: @shuckc QGIS version: 3.32 **Support `*.HEIC` HEIF format images in Processing `ImportPhotosAlgorithm`** ### PR Description: Minor changes to allow importing `*.heic` images with GDAL's HEIF image support. Firstly we add the file extension to our filter. I wanted to make this conditional on the GDAL driver being available, but our provider layer for GDAL does not seem to allow querying driver availability. Once the GDAL source is open, multiple metadata domains are available, we need to explicitly read the `EXIF` domain explicitly to find the usual GPS tags. Using a local build I confirmed with multiple images from an iOS device are imported, all the fields parse OK, attribute table looks good and markers plot correctly. The 'Map Tooltips' preview unfortunately cannot show thumbnails from HEIF format files, that would need some further investigation. First time contributor. Closes #51961 ### Commits tagged with [need-docs] or [FEATURE]
process
support heic heif format images in processing importphotosalgorithm request in qgis request for documentation from pull request qgis qgis author shuckc qgis version support heic heif format images in processing importphotosalgorithm pr description minor changes to allow importing heic images with gdal s heif image support firstly we add the file extension to our filter i wanted to make this conditional on the gdal driver being available but our provider layer for gdal does not seem to allow querying driver availability once the gdal source is open multiple metadata domains are available we need to explicitly read the exif domain explicitly to find the usual gps tags using a local build i confirmed with multiple images from an ios device are imported all the fields parse ok attribute table looks good and markers plot correctly the map tooltips preview unfortunately cannot show thumbnails from heif format files that would need some further investigation first time contributor closes commits tagged with or
1
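The PR described above wanted the `*.heic` entry in the file-extension filter to be conditional on the GDAL HEIF driver actually being available. That pattern is easy to sketch with a hypothetical helper (not the QGIS code):

```python
def build_image_filter(base_extensions, heif_available=False):
    # Only advertise *.heic when the HEIF driver can actually read it.
    extensions = list(base_extensions)
    if heif_available:
        extensions.append("*.heic")
    return "Images ({})".format(" ".join(extensions))
```

In the actual PR the availability check was skipped because the provider layer could not query GDAL driver availability, so `*.heic` was added unconditionally.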
64,972
3,222,437,100
IssuesEvent
2015-10-09 01:00:08
bioinformatics-ua/catalogue
https://api.github.com/repos/bioinformatics-ua/catalogue
opened
[Redesign operations] Try to open fingerprint and receive redirect to a blank page "Error open edit for fingerprint 49"
bug high priority
In demo server: 1. Search for ieeta 2. Go to ieeta database dummy test 3. Go to questionset 12. 4. You will see a blank page saying "Error open edit for fingerprint 49"
1.0
[Redesign operations] Try to open fingerprint and receive redirect to a blank page "Error open edit for fingerprint 49" - In demo server: 1. Search for ieeta 2. Go to ieeta database dummy test 3. Go to questionset 12. 4. You will see a blank page saying "Error open edit for fingerprint 49"
non_process
try to open fingerprint and receive redirect to a blank page error open edit for fingerprint in demo server search for ieeta go to ieeta database dummy test go to questionset you will see a blank page saying error open edit for fingerprint
0
9,111
12,193,192,438
IssuesEvent
2020-04-29 14:04:00
prisma/prisma2-docs
https://api.github.com/repos/prisma/prisma2-docs
closed
Links with on startpage are problematic when opened as `/docs` (no trailing slash)
process/candidate topic: broken links
If you open the startpage as `/docs` (note the missing trailing slash) all the links with `./` do not work. These probably need to be absolute. See part of the source code here: ``` stroke-linejoin="round"></path><path d="M1.5 11.6666H15.5" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M6.75 1L5 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M12 1L10.25 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path></svg>Getting started</a></h3><p class="paragraph">The <strong>Getting started</strong> section contains <em>practical guides</em> to help you get started with Prisma.</p><ul class="list"><li><a href="./getting-started/quickstart">Quickstart</a> (5 min)</li><li>Setup Prisma<ul class="list"><li><a href="./getting-started/setup-prisma/add-to-existing-project">Add Prisma to an existing project</a> (15 min)</li><li><a href="./getting-started/setup-prisma/start-from-scratch-sql-migrations">Start from scratch (SQL migrations)</a> (15 min)</li></ul></li></ul><hr/></section><section><h3 id="understand-prisma"><a class="headings__A-sc-13lpp34-1 kYzUFs title-link" href="#understand-prisma"><svg width="17" height="18" viewBox="0 0 17 18" fill="none" xmlns="http://www.w3.org/2000/svg" class="headings__StyledAnchor-sc-13lpp34-0 gMuomf"><path d="M1.5 6.33337H15.5" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M1.5 11.6666H15.5" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M6.75 1L5 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M12 1L10.25 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path></svg>Understand Prisma</a></h3><p class="paragraph">The <strong>Understand Prisma</strong> section contains <em>conceptual</em> information about 
Prisma.</p><ul class="list"><li><a href="./understand-prisma/introduction">Introduction</a></li><li><a href="./understand-prisma/why-prisma">Why Prisma?</a></li><li>Prisma in your stack<ul class="list"><li><a href="./understand-prisma/prisma-in-your-stack/rest">REST</a></li><li><a href="./understand-prisma/prisma-in-your-stack/graphql">GraphQL</a></li><li><a href="./understand-prisma/prisma-in-your-stack/is-prisma-an-orm">Is Prisma an ORM?</a></li></ul></li><li><a href="./understand-prisma/data-modeling">Data modeling</a></li><li><a href="./understand-prisma/under-the-hood">Under the hood</a></li></ul><hr/></section><section><h3 id="reference"><a class="headings__A-sc-13lpp34-1 kYzUFs title-link" href="#reference"><svg width="17" height="18" viewBox="0 0 17 18" fill="none" xmlns="http://www.w3.org/2000/svg" class="headings__StyledAnchor-sc-13lpp34-0 gMuomf"><path d="M1.5 6.33337H15.5" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M1.5 11.6666H15.5" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M6.75 1L5 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M12 1L10.25 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path></svg>Reference</a></h3><p class="paragraph">The <strong>Reference</strong> section contains <em>technical</em> information about Prisma.</p><ul class="list"><li>Tools and interfaces<ul class="list"><li>Prisma schema<ul class="list"><li><a href="./reference/tools-and-interfaces/prisma-schema/prisma-schema-file">Prisma schema file</a></li><li><a href="./reference/tools-and-interfaces/prisma-schema/data-sources">Data sources</a></li><li><a href="./reference/tools-and-interfaces/prisma-schema/generators">Generators</a></li><li><a href="./reference/tools-and-interfaces/prisma-schema/data-model">Data model</a></li><li><a 
href="./reference/tools-and-interfaces/prisma-schema/models">Models</a></li><li><a href="./reference/tools-and-interfaces/prisma-schema/relations">Relations</a></li></ul></li><li>Prisma Client<ul class="list"><li><a href="./reference/tools-and-interfaces/prisma-client/api">API reference</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/generating-prisma-client">Generating Prisma Client</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/configuring-the-prisma-client-api">Configuring the Prisma Client API</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/crud">CRUD</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/relation-queries">Relation queries</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/field-selection">Field selection</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/raw-database-access">Raw database access</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/connection-management">Connection management</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/advanced-usage-of-generated-types">Advanced usage of generated types</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/logging">Logging</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/debugging">Debugging</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/error-formatting">Error formatting</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/transactions">Transactions</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/deployment">Deployment</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/query-engine">Query engine</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/module- ```
1.0
Links with on startpage are problematic when opened as `/docs` (no trailing slash) - If you open the startpage as `/docs` (note the missing trailing slash) all the links with `./` do not work. These probably need to be absolute. See part of the source code here: ``` stroke-linejoin="round"></path><path d="M1.5 11.6666H15.5" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M6.75 1L5 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M12 1L10.25 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path></svg>Getting started</a></h3><p class="paragraph">The <strong>Getting started</strong> section contains <em>practical guides</em> to help you get started with Prisma.</p><ul class="list"><li><a href="./getting-started/quickstart">Quickstart</a> (5 min)</li><li>Setup Prisma<ul class="list"><li><a href="./getting-started/setup-prisma/add-to-existing-project">Add Prisma to an existing project</a> (15 min)</li><li><a href="./getting-started/setup-prisma/start-from-scratch-sql-migrations">Start from scratch (SQL migrations)</a> (15 min)</li></ul></li></ul><hr/></section><section><h3 id="understand-prisma"><a class="headings__A-sc-13lpp34-1 kYzUFs title-link" href="#understand-prisma"><svg width="17" height="18" viewBox="0 0 17 18" fill="none" xmlns="http://www.w3.org/2000/svg" class="headings__StyledAnchor-sc-13lpp34-0 gMuomf"><path d="M1.5 6.33337H15.5" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M1.5 11.6666H15.5" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M6.75 1L5 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M12 1L10.25 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path></svg>Understand Prisma</a></h3><p class="paragraph">The 
<strong>Understand Prisma</strong> section contains <em>conceptual</em> information about Prisma.</p><ul class="list"><li><a href="./understand-prisma/introduction">Introduction</a></li><li><a href="./understand-prisma/why-prisma">Why Prisma?</a></li><li>Prisma in your stack<ul class="list"><li><a href="./understand-prisma/prisma-in-your-stack/rest">REST</a></li><li><a href="./understand-prisma/prisma-in-your-stack/graphql">GraphQL</a></li><li><a href="./understand-prisma/prisma-in-your-stack/is-prisma-an-orm">Is Prisma an ORM?</a></li></ul></li><li><a href="./understand-prisma/data-modeling">Data modeling</a></li><li><a href="./understand-prisma/under-the-hood">Under the hood</a></li></ul><hr/></section><section><h3 id="reference"><a class="headings__A-sc-13lpp34-1 kYzUFs title-link" href="#reference"><svg width="17" height="18" viewBox="0 0 17 18" fill="none" xmlns="http://www.w3.org/2000/svg" class="headings__StyledAnchor-sc-13lpp34-0 gMuomf"><path d="M1.5 6.33337H15.5" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M1.5 11.6666H15.5" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M6.75 1L5 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path><path d="M12 1L10.25 17" stroke="#CBD5E0" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"></path></svg>Reference</a></h3><p class="paragraph">The <strong>Reference</strong> section contains <em>technical</em> information about Prisma.</p><ul class="list"><li>Tools and interfaces<ul class="list"><li>Prisma schema<ul class="list"><li><a href="./reference/tools-and-interfaces/prisma-schema/prisma-schema-file">Prisma schema file</a></li><li><a href="./reference/tools-and-interfaces/prisma-schema/data-sources">Data sources</a></li><li><a href="./reference/tools-and-interfaces/prisma-schema/generators">Generators</a></li><li><a 
href="./reference/tools-and-interfaces/prisma-schema/data-model">Data model</a></li><li><a href="./reference/tools-and-interfaces/prisma-schema/models">Models</a></li><li><a href="./reference/tools-and-interfaces/prisma-schema/relations">Relations</a></li></ul></li><li>Prisma Client<ul class="list"><li><a href="./reference/tools-and-interfaces/prisma-client/api">API reference</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/generating-prisma-client">Generating Prisma Client</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/configuring-the-prisma-client-api">Configuring the Prisma Client API</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/crud">CRUD</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/relation-queries">Relation queries</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/field-selection">Field selection</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/raw-database-access">Raw database access</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/connection-management">Connection management</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/advanced-usage-of-generated-types">Advanced usage of generated types</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/logging">Logging</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/debugging">Debugging</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/error-formatting">Error formatting</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/transactions">Transactions</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/deployment">Deployment</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/query-engine">Query engine</a></li><li><a href="./reference/tools-and-interfaces/prisma-client/module- ```
process
links with on startpage are problematic when opened as docs no trailing slash if you open the startpage as docs note the missing trailing slash all the links with do not work these probably need to be absolute see part of the source code here stroke linejoin round getting started the getting started section contains practical guides to help you get started with prisma quickstart min setup prisma add prisma to an existing project min start from scratch sql migrations min understand prisma the understand prisma section contains conceptual information about prisma introduction why prisma prisma in your stack rest graphql is prisma an orm data modeling under the hood reference the reference section contains technical information about prisma tools and interfaces prisma schema prisma schema file data sources generators data model models relations prisma client api reference generating prisma client configuring the prisma client api crud relation queries field selection raw database access connection management advanced usage of generated types logging debugging error formatting transactions deployment query engine a href reference tools and interfaces prisma client module
1
8,670
11,802,936,759
IssuesEvent
2020-03-18 22:47:34
pacificclimate/climate-explorer-data-prep
https://api.github.com/repos/pacificclimate/climate-explorer-data-prep
opened
Calculate anomaly data
process new data
We'd like to show maps of variable anomalies in plan2adapt. There's not really a good way to inject anomaly calculation into the ncWMS/leaflet pipeline, so we need to create some netCDF datasets that contain the anomaly values. We need the PCIC12 model for the 2020, 2050, and 2080 climatologies for tasmin, tasmax, tasmean, pr, hdd, gdd, prsn, and ffd. Requires #100 and #99
1.0
Calculate anomaly data - We'd like to show maps of variable anomalies in plan2adapt. There's not really a good way to inject anomaly calculation into the ncWMS/leaflet pipeline, so we need to create some netCDF datasets that contain the anomaly values. We need the PCIC12 model for the 2020, 2050, and 2080 climatologies for tasmin, tasmax, tasmean, pr, hdd, gdd, prsn, and ffd. Requires #100 and #99
process
calculate anomaly data we d like to show maps in of variable anomalies there s not really a good way to inject anomaly calculation into the ncwms leaflet pipeline so we need to create some netcdf datasets that contain the anomaly values we need the model for the and climatologies for tasmin tasmax tasmean pr hdd gdd prsn and ffd requires and
1
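The record above describes producing anomaly datasets: a projected climatology minus a baseline climatology. As a minimal sketch of that arithmetic in plain Python — the variable names and numbers are invented for illustration, and the real work happens on gridded netCDF data:

```python
def grid_anomaly(projection, baseline):
    """Cell-by-cell anomaly: projected climatology minus baseline climatology.

    Both inputs are equally sized flat lists standing in for 2-D grids.
    """
    return [p - b for p, b in zip(projection, baseline)]

# e.g. a 2050s tasmax climatology against a baseline (values are made up)
anoms = grid_anomaly([24.0, 25.5], [21.0, 22.0])
```

The same subtraction would be applied per variable (tasmin, tasmax, pr, ...) and per climatology period.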
11,344
14,167,357,192
IssuesEvent
2020-11-12 10:10:59
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Proj.db not found for GDAL tools on QGIS 3.16 Mac all-in-one installer
Bug MacOS Processing Regression
gdalwarp gives an error on QGIS3.16 Mac because it can't find proj.db. Same operation works fine in QGIS3.10 LTR package. "ERROR 1: PROJ: proj_create_from_database: Cannot find proj.db" <img width="1001" alt="proj_error" src="https://user-images.githubusercontent.com/5227506/98100658-2f849a80-1eb7-11eb-9d3d-06b45a3f9d42.png"> To reproduce 1. Unzip and load the attached SRTM raster [N27E086.hgt.zip](https://github.com/qgis/QGIS/files/5487203/N27E086.hgt.zip) 2. Processing -> Toolbox -> Warp (reproject) 3. Set 'Target CRS' as 'EPSG:32645' and click 'Run'. 4. Operation fails. **QGIS and OS versions** QGIS version | 3.16.0-Hannover | QGIS code revision | 33475aa559 -- | -- | -- | -- Compiled against Qt | 5.14.2 | Running against Qt | 5.14.2 Compiled against GDAL/OGR | 3.1.2 | Running against GDAL/OGR | 3.1.2 Compiled against GEOS | 3.8.1-CAPI-1.13.3 | Running against GEOS | 3.8.1-CAPI-1.13.3 Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1 PostgreSQL Client Version | 12.3 | SpatiaLite Version | 4.3.0a QWT Version | 6.1.4 | QScintilla2 Version | 2.11.4 Compiled against PROJ | 6.3.2 | Running against PROJ | Rel. 6.3.2, May 1st, 2020 OS Version | macOS 10.15 Active python plugins | ORStools; slyr_community; QuickOSM; ee_plugin; profiletool; DataPlotly; quick_map_services; pathfinder; qdraw; minimal; GeoCoding; pluginbuilder3; show_time; latlontools; Qgis2threejs; qgsAzimuth; plugin_reloader; Mergin; processing; db_manager
1.0
Proj.db not found for GDAL tools on QGIS 3.16 Mac all-in-one installer - gdalwarp gives an error on QGIS3.16 Mac because it can't find proj.db. Same operation works fine in QGIS3.10 LTR package. "ERROR 1: PROJ: proj_create_from_database: Cannot find proj.db" <img width="1001" alt="proj_error" src="https://user-images.githubusercontent.com/5227506/98100658-2f849a80-1eb7-11eb-9d3d-06b45a3f9d42.png"> To reproduce 1. Unzip and load the attached SRTM raster [N27E086.hgt.zip](https://github.com/qgis/QGIS/files/5487203/N27E086.hgt.zip) 2. Processing -> Toolbox -> Warp (reproject) 3. Set 'Target CRS' as 'EPSG:32645' and click 'Run'. 4. Operation fails. **QGIS and OS versions** QGIS version | 3.16.0-Hannover | QGIS code revision | 33475aa559 -- | -- | -- | -- Compiled against Qt | 5.14.2 | Running against Qt | 5.14.2 Compiled against GDAL/OGR | 3.1.2 | Running against GDAL/OGR | 3.1.2 Compiled against GEOS | 3.8.1-CAPI-1.13.3 | Running against GEOS | 3.8.1-CAPI-1.13.3 Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1 PostgreSQL Client Version | 12.3 | SpatiaLite Version | 4.3.0a QWT Version | 6.1.4 | QScintilla2 Version | 2.11.4 Compiled against PROJ | 6.3.2 | Running against PROJ | Rel. 6.3.2, May 1st, 2020 OS Version | macOS 10.15 Active python plugins | ORStools; slyr_community; QuickOSM; ee_plugin; profiletool; DataPlotly; quick_map_services; pathfinder; qdraw; minimal; GeoCoding; pluginbuilder3; show_time; latlontools; Qgis2threejs; qgsAzimuth; plugin_reloader; Mergin; processing; db_manager
process
proj db not found for gdal tools on qgis mac all in one installer gdalwarp give an error on mac because it can t find proj db same operation works fine in ltr package error proj proj create from database cannot find proj db img width alt proj error src to reproduce unzip and load attached srtm raster processing toolbox warp reproject set target crs as epsg and click run operation fails qgis and os versions qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version macos active python plugins orstools slyr community quickosm ee plugin profiletool dataplotly quick map services pathfinder qdraw minimal geocoding show time latlontools qgsazimuth plugin reloader mergin processing db manager
1
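A common workaround for the "Cannot find proj.db" error above is to point PROJ at the directory containing proj.db via the PROJ_LIB environment variable before invoking GDAL tools. A sketch of building such an invocation — the proj data path is an assumption (on the QGIS Mac bundle it would be whatever directory actually ships proj.db), and the command is only constructed here, not run:

```python
import os

def gdalwarp_cmd(src, dst, target_crs, proj_data_dir):
    """Build a gdalwarp command plus an environment that locates proj.db.

    `proj_data_dir` is a hypothetical path supplied by the caller.
    """
    env = dict(os.environ, PROJ_LIB=proj_data_dir)
    cmd = ["gdalwarp", "-t_srs", target_crs, src, dst]
    return cmd, env

cmd, env = gdalwarp_cmd("N27E086.hgt", "out.tif", "EPSG:32645",
                        "/assumed/path/to/proj/data")
# subprocess.run(cmd, env=env) would then perform the reprojection
```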
40,359
2,868,673,686
IssuesEvent
2015-06-05 20:19:39
IQSS/dataverse
https://api.github.com/repos/IQSS/dataverse
closed
Add Additional Controlled Vocabulary Terms to Life Sciences Metadata for TB Dataverse
Component: Metadata Priority: High Status: QA Type: Feature
Will need to modify the Life Sciences Metadata to add the following controlled vocabulary terms in the following fields, will check if there are any slight variants of these terms that are more commonly used: **StudyDesignType** - Cohort Study Design (ontology: http://bioportal.bioontology.org/ontologies/OCRE/?p=classes&conceptid=http%3A%2F%2Fpurl.org%2Fnet%2FOCRe%2Fstudy_design.owl%23OCRE100078) - Randomized Controlled Trial (ontology: http://bioportal.bioontology.org/ontologies/EDDA?p=classes&conceptid=http%3A%2F%2Fedda.dbmi.pitt.edu%2Fontologies%2FStudyDesigns.owl%23randomized_controlled_design) - Nested Case Control Design (ontology: http://bioportal.bioontology.org/ontologies/EDDA/?p=classes&conceptid=http%3A%2F%2Fedda.dbmi.pitt.edu%2Fontologies%2FStudyDesigns.owl%23nested_case_control_design&jump_to_nav=true) **StudyFactorType** - drug susceptibility test **Organism** (note: make sure these are in NCBI Taxonomy) - Mycobacterium tuberculosis - Mycobacterium canetti - Mycobacterium africanium (SEK - in the taxonomy it's africanum) **Measurement** - targeted sequencing - drug susceptibility test **technology type** - culture based drug susceptibility testing, single concentration - culture based drug susceptibility testing, two concentrations - culture based drug susceptibility testing, three or more concentrations (minimum inhibitory concentration measurement) **technology platform** - Indirect proportion method on LJ medium - Indirect proportion method on Middlebrook Agar 7H9 - Indirect proportion method on Middlebrook Agar 7H10 - Indirect proportion method on Middlebrook Agar 7H11 - BD BACTEC MGIT 960 - BD BACTEC MGIT 320 - BD Radiometric BACTEC 460TB - microplate Alamar Blue (resazurin) colorimetric method
1.0
Add Additional Controlled Vocabulary Terms to Life Sciences Metadata for TB Dataverse - Will need to modify the Life Sciences Metadata to add the following controlled vocabulary terms in the following fields, will check if there are any slight variants of these terms that are more commonly used: **StudyDesignType** - Cohort Study Design (ontology: http://bioportal.bioontology.org/ontologies/OCRE/?p=classes&conceptid=http%3A%2F%2Fpurl.org%2Fnet%2FOCRe%2Fstudy_design.owl%23OCRE100078) - Randomized Controlled Trial (ontology: http://bioportal.bioontology.org/ontologies/EDDA?p=classes&conceptid=http%3A%2F%2Fedda.dbmi.pitt.edu%2Fontologies%2FStudyDesigns.owl%23randomized_controlled_design) - Nested Case Control Design (ontology: http://bioportal.bioontology.org/ontologies/EDDA/?p=classes&conceptid=http%3A%2F%2Fedda.dbmi.pitt.edu%2Fontologies%2FStudyDesigns.owl%23nested_case_control_design&jump_to_nav=true) **StudyFactorType** - drug susceptibility test **Organism** (note: make sure these are in NCBI Taxonomy) - Mycobacterium tuberculosis - Mycobacterium canetti - Mycobacterium africanium (SEK - in the taxonomy it's africanum) **Measurement** - targeted sequencing - drug susceptibility test **technology type** - culture based drug susceptibility testing, single concentration - culture based drug susceptibility testing, two concentrations - culture based drug susceptibility testing, three or more concentrations (minimum inhibitory concentration measurement) **technology platform** - Indirect proportion method on LJ medium - Indirect proportion method on Middlebrook Agar 7H9 - Indirect proportion method on Middlebrook Agar 7H10 - Indirect proportion method on Middlebrook Agar 7H11 - BD BACTEC MGIT 960 - BD BACTEC MGIT 320 - BD Radiometric BACTEC 460TB - microplate Alamar Blue (resazurin) colorimetric method
non_process
add additional controlled vocabulary terms to life sciences metadata for tb dataverse will need to modify the life sciences metadata to add the following controlled vocabulary terms in the following fields will check if there are any slight variants of these terms that are more standardly used studydesigntype cohort study design ontology randomized controlled trial ontology nested case control design ontology studyfactortype drug susceptibility test organism note make sure these are in ncbi taxonomy mycobacterium tuberculosis mycobacterium canetti mycobacterium africanium sek in the taxonomy it s africanum measurement targeted sequencing drug susceptibility test technology type culture based drug susceptibility testing single concentration culture based drug susceptibility testing two concentrations culture based drug susceptibility testing three or more concentrations minimium inhibitory concentration measurement technology platform indirect proportion method on lj medium indirect proportion method on middlebrook agar indirect proportion method on middlebrook agar indirect proportion method on middlebrook agar bd bactec mgit bd bactec mgit bd radiometric bactec microplate alamar blue resazurin colorimetric method
0
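Adding controlled-vocabulary terms like those listed above amounts to appending strings to a field's allowed-values list. A simplified sketch — the dict layout is invented for illustration and is not Dataverse's actual metadata-block format:

```python
def add_terms(metadata_block, field, new_terms):
    """Append controlled-vocabulary terms to a field, skipping duplicates."""
    vocab = metadata_block.setdefault(field, [])
    for term in new_terms:
        if term not in vocab:
            vocab.append(term)
    return metadata_block

# hypothetical starting state for the StudyFactorType field
block = {"studyFactorType": ["genotyping"]}
add_terms(block, "studyFactorType", ["drug susceptibility test"])
```

The duplicate check matters because the issue asks to verify whether near-variant terms already exist before adding new ones.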
106,623
13,327,927,923
IssuesEvent
2020-08-27 13:51:01
nim-lang/Nim
https://api.github.com/repos/nim-lang/Nim
closed
The StmtList processing of template parameters can lead to unexpected errors
Language design
Consider the following code: ``` template foo(x: stmt) = discard foo: var a = 10 foo: var a = 20 # error: redefinition of `a` ``` Even though the pass-in blocks are discarded, they pollute the scope where `foo` is used and can produce redefinition errors as in the example above.
1.0
The StmtList processing of template parameters can lead to unexpected errors - Consider the following code: ``` template foo(x: stmt) = discard foo: var a = 10 foo: var a = 20 # error: redefinition of `a` ``` Even though the pass-in blocks are discarded, they pollute the scope where `foo` is used and can produce redefinition errors as in the example above.
non_process
the stmtlist processing of template parameters can lead to unexpected errors consider the following code template foo x stmt discard foo var a foo var a error redefinition of a even though the pass in blocks are discarded they pollute the scope where foo is used and can produce redefinition errors as in the example above
0
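The Nim report above concerns a non-hygienic template: names bound inside the passed-in block leak into the caller's scope even though the block's value is discarded. The effect can be mimicked — as an analogy only, since Python has no Nim-style templates — by exec-ing a block into a shared namespace:

```python
def foo(block, namespace):
    # Naive "template": evaluates the block in the caller's namespace even
    # though its result is discarded, so any names it binds leak out.
    exec(block, namespace)

ns = {}
foo("a = 10", ns)
foo("a = 20", ns)  # Python silently rebinds `a`; Nim instead reports a redefinition error
```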
163,420
25,810,033,399
IssuesEvent
2022-12-11 19:19:14
nyx-space/nyx
https://api.github.com/repos/nyx-space/nyx
opened
Support the True Equator Mean Equinox (TEME) frame
QA:Design Kind:New feature Topic: Mission Design Priority: normal
# High level description Original discussion here: https://gitlab.com/nyx-space/nyx/-/issues/229 References: + OREKit: https://gitlab.orekit.org/orekit/orekit/-/blob/develop/src/main/java/org/orekit/frames/TEMEProvider.java + AstroPy: https://github.com/astropy/astropy/blob/45abe1d37242c1a256bc59f234b4d3e83e342351/astropy/coordinates/builtin_frames/intermediate_rotation_transforms.py#L255 (and test: https://github.com/astropy/astropy/blob/ba9b5bff7f4dd431d99590e0f33d4dfbf98b4231/astropy/coordinates/tests/test_intermediate_transformations.py#L561 ) + SkyField: https://github.com/skyfielders/python-skyfield/blob/8d3dc132f43656766a58e3e1984363592aa1f11a/skyfield/sgp4lib.py#L328 + asteRISK: https://rdrr.io/cran/asteRisk/src/R/coordinatesTransformations.R a.i. solution documentation: https://ai-solutions.com/_help_Files/orbit_reference_frames.htm#achr_trueequatormeanequinox This seems to require the UT1 computed, cf. #92 and https://github.com/nyx-space/hifitime/issues/43 . **Note: only the high level description needs to be filled out to report an issue or to request a new feature.** ## New feature _If this is a new feature, describe the need you have either with use cases or examples. If this is a bug report, file a bug report instead._ # Requirements The purpose of this section is to fill out the [Requirements](https://quality.nyxspace.com/process/requirements/) of the QA process. Requirements answer the question: what does the system need to do? It does not answer the question of how does the system do this? ## Test plans How do we test that these requirements are fulfilled correctly? What are some edge cases we should be aware of when developing the test code. # Design This is the [design](https://quality.nyxspace.com/process/design/) section. Each subsection has its own subsection in the quality assurance document. ## Algorithm demonstration If this issue requires a change in an algorithm, it should be described here. 
This algorithm should be described thoroughly enough to be used as documentation. This section may also simply refer to an algorithm in the literature or in another piece of software that has been validated. The quality of that reference will be determined case by case. ## API definition Define how the Nyx APIs will be affected by this: what new functions are available, do any previous functions change their definition, why call these functions by that name, etc. ## High level architecture Document, discuss, and optionally upload a design diagram into this section. ## Detailed design The detailed design **will** be used in the documentation of how Nyx works. _Feel free to fill out additional QA sections here, but these will typically be determined during the development, including the release in which this issue will be tackled._
2.0
Support the True Equator Mean Equinox (TEME) frame - # High level description Original discussion here: https://gitlab.com/nyx-space/nyx/-/issues/229 References: + OREKit: https://gitlab.orekit.org/orekit/orekit/-/blob/develop/src/main/java/org/orekit/frames/TEMEProvider.java + AstroPy: https://github.com/astropy/astropy/blob/45abe1d37242c1a256bc59f234b4d3e83e342351/astropy/coordinates/builtin_frames/intermediate_rotation_transforms.py#L255 (and test: https://github.com/astropy/astropy/blob/ba9b5bff7f4dd431d99590e0f33d4dfbf98b4231/astropy/coordinates/tests/test_intermediate_transformations.py#L561 ) + SkyField: https://github.com/skyfielders/python-skyfield/blob/8d3dc132f43656766a58e3e1984363592aa1f11a/skyfield/sgp4lib.py#L328 + asteRISK: https://rdrr.io/cran/asteRisk/src/R/coordinatesTransformations.R a.i. solution documentation: https://ai-solutions.com/_help_Files/orbit_reference_frames.htm#achr_trueequatormeanequinox This seems to require the UT1 computed, cf. #92 and https://github.com/nyx-space/hifitime/issues/43 . **Note: only the high level description needs to be filled out to report an issue or to request a new feature.** ## New feature _If this is a new feature, describe the need you have either with use cases or examples. If this is a bug report, file a bug report instead._ # Requirements The purpose of this section is to fill out the [Requirements](https://quality.nyxspace.com/process/requirements/) of the QA process. Requirements answer the question: what does the system need to do? It does not answer the question of how does the system do this? ## Test plans How do we test that these requirements are fulfilled correctly? What are some edge cases we should be aware of when developing the test code. # Design This is the [design](https://quality.nyxspace.com/process/design/) section. Each subsection has its own subsection in the quality assurance document. 
## Algorithm demonstration If this issue requires a change in an algorithm, it should be described here. This algorithm should be described thoroughly enough to be used as documentation. This section may also simply refer to an algorithm in the literature or in another piece of software that has been validated. The quality of that reference will be determined case by case. ## API definition Define how the Nyx APIs will be affected by this: what new functions are available, do any previous functions change their definition, why call these functions by that name, etc. ## High level architecture Document, discuss, and optionally upload a design diagram into this section. ## Detailed design The detailed design **will** be used in the documentation of how Nyx works. _Feel free to fill out additional QA sections here, but these will typically be determined during the development, including the release in which this issue will be tackled._
non_process
support the true equator mean equinox teme frame high level description original discussion here references orekit astropy and test skyfield asterisk a i solution documentation this seems to require the computed cf and note only the high level description needs to be filled out to report an issue or to request a new feature new feature if this is a new feature describe the need you have either with use cases or examples if this is a bug report file a bug report instead requirements the purpose of this section is to fill out the of the qa process requirements answer the question what does the system need to do it does not answer the question of how does the system do this test plans how do we test that these requirements are fulfilled correctly what are some edge cases we should be aware of when developing the test code design this is the section each subsection has its own subsection in the quality assurance document algorithm demonstration if this issue requires a change in an algorithm it should be described here this algorithm should be described thoroughly enough to be used as documentation this section may also simply refer to an algorithm in the literature or in another piece of software that has been validated the quality of that reference will be determined case by case api definition define how the nyx apis will be affect by this what are new functions available do any previous function change their definition why call these functions by that name etc high level architecture document discuss and optionally upload design diagram into this section detailed design the detailed design will be used in the documentation of how nyx works feel free to fill out additional qa sections here but these will typically be determined during the development including the release in which this issue will be tackled
0
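The TEME frame requested above differs from Earth-fixed frames by a rotation about the z-axis through an angle related to sidereal time. As a sketch of just that rotation primitive — the angle is left to the caller, and this is not a full frame-transformation implementation:

```python
import math

def rotate_z(vec, angle):
    """Rotate a 3-vector about the +z axis by `angle` radians (right-handed)."""
    x, y, z = vec
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

# a quarter turn carries +x onto +y
v = rotate_z((1.0, 0.0, 0.0), math.pi / 2)
```

A real TEME implementation would compute the rotation angle from UT1 (hence the dependency on hifitime's UT1 support noted in the issue).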
78,179
27,358,514,300
IssuesEvent
2023-02-27 14:25:57
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
DSLContext ddl does not correctly export view contents from existing database
T: Defect
### Expected behavior Something like this create view "public"."demo_description"("id","name","description") as (SELECT de.id, de.name, ds.text AS description FROM demo de JOIN description ds ON ds.id = de.id;) ### Actual behavior create view "public"."demo_description"( "id", "name", "description" ) as ### Steps to reproduce the problem 1. Connect Spring boot application to existing DataSource 2. Try to generate DDL information about tables ### jOOQ Version JOOQ Community Edition 3.17.8 ### Database product and version PostgreSQL 13.3 on x86_64-apple-darwin19.6.0, compiled by Apple clang version 11.0.3 (clang-1103.0.32.62), 64-bit ### Java Version openjdk 11.0.10 2021-01-19 ### OS Version MacOS 13.0 ### JDBC driver name and version (include name if unofficial driver) org.postgresql:postgresql:42.5.4
1.0
DSLContext ddl does not correctly export view contents from existing database - ### Expected behavior Something like this create view "public"."demo_description"("id","name","description") as (SELECT de.id, de.name, ds.text AS description FROM demo de JOIN description ds ON ds.id = de.id;) ### Actual behavior create view "public"."demo_description"( "id", "name", "description" ) as ### Steps to reproduce the problem 1. Connect Spring boot application to existing DataSource 2. Try to generate DDL information about tables ### jOOQ Version JOOQ Community Edition 3.17.8 ### Database product and version PostgreSQL 13.3 on x86_64-apple-darwin19.6.0, compiled by Apple clang version 11.0.3 (clang-1103.0.32.62), 64-bit ### Java Version openjdk 11.0.10 2021-01-19 ### OS Version MacOS 13.0 ### JDBC driver name and version (include name if unofficial driver) org.postgresql:postgresql:42.5.4
non_process
dslcontext ddl does not correctly export view contents from existing database expected behavior something like this create view public demo description id name description as select de id de name ds text as description from demo de join description ds on ds id de id actual behavior create view public demo description id name description as steps to reproduce the problem connect spring boot application to existing datasource try to generate ddl information about tables jooq version jooq community edition database product and version postgresql on apple compiled by apple clang version clang bit java version openjdk os version macos jdbc driver name and version include name if unofficial driver org postgresql postgresql
0
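As a minimal analogy for the view-DDL export discussed above: SQLite keeps the complete CREATE VIEW statement, body included, in its sqlite_master catalog. This uses the stdlib sqlite3 module rather than jOOQ or PostgreSQL, so it only illustrates the expected behaviour (DDL export preserving the view's SELECT body):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE demo (id INTEGER, name TEXT);
    CREATE VIEW demo_description AS SELECT id, name FROM demo;
""")
# the catalog stores the full statement, including the SELECT body
ddl = conn.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'view' AND name = 'demo_description'"
).fetchone()[0]
```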
22,710
32,037,050,917
IssuesEvent
2023-09-22 16:07:00
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Status of Bazel 7.0.0-pre.20230917.3
P1 type: process release team-OSS
- Expected release date: 2023-09-22 Task list: - [x] Pick release baseline: [1cf392ff](https://github.com/bazelbuild/bazel/commit/1cf392ff3918386858b8c038f82c013b1e04be98) with cherrypicks [32563ca1](https://github.com/bazelbuild/bazel/commit/32563ca1728a69437b26efa19d18eebfcecc4765) [19f5e933](https://github.com/bazelbuild/bazel/commit/19f5e933d3fc91848b2b786cb11a6decaa96cf6e) - [x] Create release candidate: https://releases.bazel.build/7.0.0/rolling/7.0.0-pre.20230917.3rc1/index.html - [x] Post-submit: https://buildkite.com/bazel/bazel-bazel/builds?branch=release-7.0.0-pre.20230917.3rc1 - [x] Push the release: https://releases.bazel.build/7.0.0/rolling/7.0.0-pre.20230917.3/index.html - [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
1.0
Status of Bazel 7.0.0-pre.20230917.3 - - Expected release date: 2023-09-22 Task list: - [x] Pick release baseline: [1cf392ff](https://github.com/bazelbuild/bazel/commit/1cf392ff3918386858b8c038f82c013b1e04be98) with cherrypicks [32563ca1](https://github.com/bazelbuild/bazel/commit/32563ca1728a69437b26efa19d18eebfcecc4765) [19f5e933](https://github.com/bazelbuild/bazel/commit/19f5e933d3fc91848b2b786cb11a6decaa96cf6e) - [x] Create release candidate: https://releases.bazel.build/7.0.0/rolling/7.0.0-pre.20230917.3rc1/index.html - [x] Post-submit: https://buildkite.com/bazel/bazel-bazel/builds?branch=release-7.0.0-pre.20230917.3rc1 - [x] Push the release: https://releases.bazel.build/7.0.0/rolling/7.0.0-pre.20230917.3/index.html - [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
process
status of bazel pre expected release date task list pick release baseline with cherrypicks create release candidate post submit push the release update the
1

35,328
6,444,684,728
IssuesEvent
2017-08-12 15:38:14
haskell/cabal
https://api.github.com/repos/haskell/cabal
closed
Wiki release instructions are out of date
documentation release
The `Makefile` and instructions on https://github.com/haskell/cabal/wiki/Making-a-release are out of date: ``` make: *** No rule to make target 'doc/*.markdown', needed by 'dist/doc/users-guide/index.html'. Stop. ``` I think we just need to update the `user-guide` target to use Sphinx instead of `pandoc` for user guide generation.
1.0
Wiki release instructions are out of date - The `Makefile` and instructions on https://github.com/haskell/cabal/wiki/Making-a-release are out of date: ``` make: *** No rule to make target 'doc/*.markdown', needed by 'dist/doc/users-guide/index.html'. Stop. ``` I think we just need to update the `user-guide` target to use Sphinx instead of `pandoc` for user guide generation.
non_process
wiki release instructions are out of date the makefile and instructions on are out of date make no rule to make target doc markdown needed by dist doc users guide index html stop i think we just need to update the user guide target to use sphinx instead of pandoc for user guide generation
0
16,608
21,664,123,411
IssuesEvent
2022-05-07 00:23:53
dinobossytnew/Traduciones-
https://api.github.com/repos/dinobossytnew/Traduciones-
closed
Nuevo y Por añadir
EN PROCESSO
- [x] * PlayerKits - [x] * AdvancedKits - [x] * PyroFishingPro - [x] * PyroMiningPro - [x] * PyroSpawners - [x] * Mmoitems - [x] * ExecutableItems - [x] * EpicCraftingsPlus
1.0
Nuevo y Por añadir - - [x] * PlayerKits - [x] * AdvancedKits - [x] * PyroFishingPro - [x] * PyroMiningPro - [x] * PyroSpawners - [x] * Mmoitems - [x] * ExecutableItems - [x] * EpicCraftingsPlus
process
nuevo y por añadir playerkits advancedkits pyrofishingpro pyrominingpro pyrospawners mmoitems executableitems epiccraftingsplus
1
167,788
6,346,566,981
IssuesEvent
2017-07-28 02:47:07
connectivedx/fuzzy-chainsaw
https://api.github.com/repos/connectivedx/fuzzy-chainsaw
closed
Missing enhanced-resolve module
high-priority
Fresh version of FC2.0 on Windows won't run watch or build. The **enhanced-resolve** module appears to be missing from the FC package.json. After running "npm install enhanced-resolve", my watch and builds all started working as anticipated. Let's make sure we don't forget to include this. Thanks!
1.0
Missing enhanced-resolve module - Fresh version of FC2.0 on Windows won't run watch or build. The **enhanced-resolve** module appears to be missing from the FC package.json. After running "npm install enhanced-resolve", my watch and builds all started working as anticipated. Let's make sure we don't forget to include this. Thanks!
non_process
missing enhanced resolve module fresh version of on windows wont run watch or build appears to be a missing enhanced resolve module not found in fc package json after running npm install enhanced resolve my watch and builds all started working as anticipated lets make sure we don t forget to include this thanks
0
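The fix described above boils down to declaring the missing module in package.json's dependencies. A sketch using the stdlib json module — the version range is an assumption for illustration, not the one FC2.0 actually needed:

```python
import json

def add_dependency(package_json_text, name, version_range):
    """Return package.json text with `name` added under "dependencies"."""
    pkg = json.loads(package_json_text)
    pkg.setdefault("dependencies", {})[name] = version_range
    return json.dumps(pkg, indent=2)

fixed = add_dependency('{"name": "fc", "dependencies": {}}',
                       "enhanced-resolve", "^3.0.0")  # version range is assumed
```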
9,880
12,886,542,479
IssuesEvent
2020-07-13 09:39:10
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
opened
Anomaly calculation for OBS got broken in early March.
bug preprocessor
**Describe the bug** Unfortunately, it seems like none of the tests has flagged (something to look into later I would say!). But for several observational datasets the calculation of anomalies goes wrong with non-physical values coming out of the preprocessor. I could track down the problems to 3-4 March 2020. With everything working fine on the 3rd of March (`git checkout 'master@{2020-03-03}'`) and wrong results from 4 March onwards (`git checkout 'master@{2020-03-04}'`). To create the plots and run the recipe, one needs a specific ESMValTool branch: `git checkout C3S_511_MPQB`. But reproducing it and simply inspecting NetCDF output files would work as well of course. Since the changes in the `anomalies` preprocessor were authored by @jvegasbsc my hope is that he can solve this issue. It would be a good additional check if someone can reproduce the error (@hb326 @BenMGeo or @mattiarighi). The error also occurred for `ERA5`, a dataset that is more widely used. ![lineplot_sm_2015-2018_GOOD](https://user-images.githubusercontent.com/27730548/87287705-b83b8d00-c4fa-11ea-8eda-466edc42f436.png) ![lineplot_sm_2015-2018_WRONG](https://user-images.githubusercontent.com/27730548/87287719-bbcf1400-c4fa-11ea-89d2-49e13ac733f1.png) ``` diff --git a/esmvalcore/preprocessor/_time.py b/esmvalcore/preprocessor/_time.py index 1e29395e3..c1cf2071b 100644 --- a/esmvalcore/preprocessor/_time.py +++ b/esmvalcore/preprocessor/_time.py @@ -462,16 +462,15 @@ def anomalies(cube, period, reference): cube_time = cube.coord('time') ref = {} for ref_slice in reference.slices_over(ref_coord): - ref[ref_slice.coord(ref_coord).points[0]] = da.ravel( - ref_slice.core_data()) + ref[ref_slice.coord(ref_coord).points[0]] = ref_slice.core_data() + cube_coord_dim = cube.coord_dims(cube_coord)[0] + slicer = [slice(None)] * len(data.shape) + new_data = [] for i in range(cube_time.shape[0]): - time = cube_time.points[i] - indexes = cube_time.points == time - indexes = 
iris.util.broadcast_to_shape(indexes, data.shape, - (cube_coord_dim, )) - data[indexes] = data[indexes] - ref[cube_coord.points[i]] - + slicer[cube_coord_dim] = i + new_data.append(data[tuple(slicer)] - ref[cube_coord.points[i]]) + data = da.stack(new_data, axis=cube_coord_dim) cube = cube.copy(data) cube.remove_coord(cube_coord) return cube commit f61cc0e946dd4cd1de02fb738046f5273db16025 Author: Javier Vegas <javier.vegas@bsc.es> Date: Tue Jan 21 16:31:42 2020 +0100 Remove print and extra coordinate diff --git a/esmvalcore/preprocessor/_time.py b/esmvalcore/preprocessor/_time.py index 612bfa47a..1e29395e3 100644 --- a/esmvalcore/preprocessor/_time.py +++ b/esmvalcore/preprocessor/_time.py @@ -411,7 +411,6 @@ def climate_statistics(cube, operator='mean', period='full'): operator = get_iris_analysis_operation(operator) clim_cube = cube.aggregated_by(clim_coord, operator) clim_cube.remove_coord('time') - print(clim_cube) if clim_cube.coord(clim_coord.name()).is_monotonic(): iris.util.promote_aux_coord_to_dim_coord(clim_cube, clim_coord.name()) else: @@ -474,6 +473,7 @@ def anomalies(cube, period, reference): data[indexes] = data[indexes] - ref[cube_coord.points[i]] cube = cube.copy(data) + cube.remove_coord(cube_coord) return cube ``` **Please attach** - The recipe that you are trying to run, you can find a copy in the `run` directory in the output directory - The `main_log_debug.txt` file, this can also be found in the `run` directory in the output directory ``` # ESMValTool # recipe_anom_bug.yml --- documentation: description: | Recipe for demonstrating a bug. To get wrong results run `git checkout ' master@{2020-03-04}'` in ESMValCore dir. To get good results run `git checkout 'master@{2020-03-03}'` in ESMValCore dir. Use branch `C3S_511_MPQB` from ESMValTool. 
authors: - crezee_bas ################################################ # Define some default parameters using anchors # ################################################ commongrid: &commongrid regrid: target_grid: 0.25x0.25 scheme: nearest regrid_time: # this is needed for a fully homogeneous time coordinate frequency: mon icefreeland: &icefreeland mask_landsea: mask_out: sea mask_glaciated: mask_out: glaciated commonmask: &commonmask # should be preceded by commongrid mask_fillvalues: threshold_fraction: 0.0 # keep all missing values min_value: -1e20 # small enough not to alter the data nonnegative: &nonnegative clip: minimum: 0.0 ################################################ ################################################ ################################################ datasets_from_1992_2019: &datasets_from_1992_2019 additional_datasets: - {dataset: CDS-SATELLITE-SOIL-MOISTURE, type: sat, project: OBS, mip: Lmon, version: CUSTOM-TCDR-ICDR-20200602, tier: 3, start_year: 2015, end_year: 2018} # - {dataset: CDS-SATELLITE-SOIL-MOISTURE, project: OBS, tier: 3, type: sat, # version: CUSTOM-TCDR-ICDR-20200602, start_year: 1992, end_year: 2019, mip: Lmon} # - {dataset: cds-era5-land-monthly, type: reanaly, project: OBS, mip: Lmon, # version: 1, tier: 3, start_year: 1992, end_year: 2019} # - {dataset: cds-era5-monthly, type: reanaly, project: OBS, mip: Lmon, # version: 1, tier: 3, start_year: 1992, end_year: 2019} # - {dataset: MERRA2, type: reanaly, project: OBS6, mip: Lmon, # version: 5.12.4, tier: 3, start_year: 1992, end_year: 2019} preprocessors: pp_lineplots_ano: custom_order: true <<: *icefreeland <<: *commongrid <<: *commonmask <<: *nonnegative anomalies: period: monthly reference: [2015,2018] standardize: false area_statistics: operator: mean diagnostics: lineplots_ano: variables: sm: preprocessor: pp_lineplots_ano mip: Lmon scripts: lineplot: script: mpqb/mpqb_lineplot.py <<: *datasets_from_1992_2019 ```
1.0
Anomaly calculation for OBS got broken early march.
process
1
19,981
26,460,777,566
IssuesEvent
2023-01-16 17:21:31
apache/arrow-rs
https://api.github.com/repos/apache/arrow-rs
closed
Release `object_store` `0.5.3` (next release after `0.5.2`)
development-process object-store
Follow on from https://github.com/apache/arrow-rs/issues/3229 * Planned Release Candidate: ~TBD~ 2023-01-05 * Planned Release and Publish to crates.io: ~TBD~ 2023-01-08 Items: - [ ] Update changelog and readme: - [ ] Create release candidate: - [ ] Release candidate approved: - [ ] Release to crates.io:
1.0
process
1
404,286
27,457,061,159
IssuesEvent
2023-03-02 22:22:41
gbowne1/reactsocialnetwork
https://api.github.com/repos/gbowne1/reactsocialnetwork
opened
[Update] Improve our project README.md
bug documentation enhancement help wanted question
Our project README.md could use some improvements. It's ok, but leaves a lot to be desired when first looking at the project Suggest adding stuff from <https://shields.io/> as a first task - [ ] Add shields from <https://shields.io/>
1.0
non_process
0
2,458
5,240,724,469
IssuesEvent
2017-01-31 13:58:23
rubberduck-vba/Rubberduck
https://api.github.com/repos/rubberduck-vba/Rubberduck
reopened
VBA implicitly extends the IControl interface to controls in MSForms - RD doesn't
bug parse-tree-processing
I noticed this after the reimplementation of `MemberNotOnInterfaceInspection`. This code (in a UserForm) triggers an inspection result for the call to `SetFocus`: ``` Private Sub UserForm_Activate() TextBox1.SetFocus End Sub ``` I initially thought that there was a problem in the type name binding for controls, but it correctly resolves as `IMdcText`, so I checked the typelib for fm20.dll and discovered that none of the controls are declared as implementing the `IControl` interface (`SetFocus` is a member of `IControl`). The TextBox2 coclass doesn't implement it either: ``` [ uuid(8BD21D10-EC42-11CE-9E0D-00AA006002F3), helpcontext(0x001e871e), noncreatable, control ] coclass TextBox { [default] interface IMdcText; [default, source] dispinterface MdcTextEvents; }; ``` This is only a guess, but I'm thinking that either OLE or VBA handles the `IControl` interface because it deals mainly with form placement, data binding, display settings, etc. The `control` flag (`TYPEFLAG_FCONTROL` in the typelib) looks like a likely candidate for triggering this behavior. RD should apply the `IControl` members for `UserForm` controls to allow them to bind properly.
1.0
process
1
395,530
11,687,899,331
IssuesEvent
2020-03-05 13:39:55
benetech/ServiceNet
https://api.github.com/repos/benetech/ServiceNet
closed
Custom record views - need a UI tweak
PM: Workflow Priority C
Now that I see the "custom record views" that allows me to dictate the visible fields (love it) I have a small request to make the capability a little more obvious. How about to the left of the gear/cog icon we have the prose: "Show less fields" to make it more obvious that is what it is there for? ![CaptureA](https://user-images.githubusercontent.com/52297301/74120807-bb140980-4b79-11ea-88c5-0a92c1c8e7d1.PNG)
1.0
non_process
0
43,739
11,299,992,940
IssuesEvent
2020-01-17 12:35:52
microsoft/WindowsTemplateStudio
https://api.github.com/repos/microsoft/WindowsTemplateStudio
opened
Build dev.templates.tests.full_20200117.2 failed
bug vsts-build
## Build dev.templates.tests.full_20200117.2 - **Build result:** `failed` - **Build queued:** 1/17/2020 10:12:45 AM - **Build duration:** 142.64 minutes ### Details Build [dev.templates.tests.full_20200117.2](https://winappstudio.visualstudio.com/web/build.aspx?pcguid=a4ef43be-68ce-4195-a619-079b4d9834c2&builduri=vstfs%3a%2f%2f%2fBuild%2fBuild%2f32586) failed + xunit.console.exe : Build_Empty_Legacy_AddRightClick_Uwp(projectType: "SplitView", framework: "CodeBehind", platform: "Uwp", language: "C#") [FAIL] At pbatch:27 char:27 + + CategoryInfo : NotSpecified: ( Build_Empty...e: "C#") [FAIL]:String) [], RemoteException + FullyQualifiedErrorId : NativeCommandError + PSComputerName : [localhost] + Process completed with exit code 1 and had 1 error(s) written to the error stream. Find detailed information in the [build log files]()
1.0
non_process
0
15,785
19,976,139,847
IssuesEvent
2022-01-29 05:10:35
tushushu/ulist
https://api.github.com/repos/tushushu/ulist
closed
Implement where method for List
enhancement data processing
Like the SQL `WHERE` statement, implement a `where` method to filter the `ulist` by a given condition. For example: ```Python arr = ul.arange(5) arr.where(lambda x: x > 1) UltraFastList([2, 3, 4]) ```
1.0
process
1
7,455
10,561,185,784
IssuesEvent
2019-10-04 15:21:09
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
closed
How to regrid fx files
preprocessor
I would like to regrid an fx file - `sftlf `- onto a 5x5 grid. I need this as input to an existing python package which is expecting tas, tos, siconc and sftlf in NetCDF files on a 5x5 grid, to produce masked and blended GMST. I can regrid `tas`, `tos` and `siconc` onto a 5x5 grid using the `regrid` preprocessor, by adding this to the recipe like this: ``` preprocessors: regrid_5_5: regrid: target_grid: 5x5 scheme: linear diagnostics: fig_test_attribute: description: Test of masked and blended surface temperature. variables: tas: preprocessor: regrid_5_5 mip: Amon field: T2Ms project: CMIP6 exp: historical grid: gr start_year: 1850 end_year: 2014 additional_datasets: - {dataset: CNRM-CM6-1, ensemble: r1i1p1f2} ``` This works OK. But if I try to do the same for sftlf by adding this: ``` sftlf: preprocessor: regrid_5_5 mip: fx field: F2Ms project: CMIP6 exp: historical grid: gr ``` Then I get this: `esmvaltool._recipe_checks.RecipeError: Missing keys {'end_year', 'start_year'} from variable sftlf in diagnostic fig_test_attribute` and the recipe does not run. If I add a dummy start year and end year, then I get: ``` raise RecipeError("No input files found for variable {}".format(var)) esmvaltool._recipe_checks.RecipeError: No input files found for variable {'preprocessor': 'regrid_5_5', 'mip': 'fx', 'field': 'F2Ms', 'project': 'CMIP6', 'exp': 'historical', 'grid': 'gr', 'start_year': 1850, 'end_year': 2014, 'variable_group': 'sftlf', 'short_name': 'sftlf', 'diagnostic': 'fig_test_attribute', 'dataset': 'CNRM-CM6-1', 'ensemble': 'r1i1p1f2', 'recipe_dataset_index': 0, 'cmor_table': 'CMIP6', 'institute': ['CNRM-CERFACS'], 'standard_name': 'land_area_fraction', 'long_name': 'Land Area Fraction', 'units': '%', 'modeling_realm': ['atmos'], 'frequency': 'fx', 'filename': '/scratch/b/b380746/esmvaltool_output/test_attribute_20190307_231334/preproc/fig_test_attribute/sftlf/CMIP6_CNRM-CM6-1_fx_historical_r1i1p1f2_F2Ms_sftlf_1850-2014.nc'} ``` and again the recipe doesn't run. 
I thought that specifying the field `F2Ms` should mean that years are not required, but this doesn't seem to be implemented. I don't actually want to mask the `tas`, `tos`, or `siconc` with `sftlf` at this point, I just want the regridded sftlf file. (The masking a blending code for example uses tas over sea ice, so masking tas with sftlf would not be helpful). Thanks very much!
1.0
process
1
141,946
11,448,503,929
IssuesEvent
2020-02-06 03:40:34
LLNL/axom
https://api.github.com/repos/LLNL/axom
opened
Follow-up tasks for TPL CI improvements
CI TPL Testing maintenance
@keithhealy has significantly improved our CI testing by adding docker images with pre-built TPLs (#155 ) This issue tracks some unresolved tasks: - [ ] Build ``RAJA`` directly using our uberenv. @keithhealy ran into trouble building ``RAJA`` through docker and our current dockerfile's are building ``RAJA`` separately. - [ ] Generate TPLs for our ``Windows`` config on azure (@agcapps is working on this) and re-enable ``sidre`` for this config - [ ] Add unit test to ``Windows`` config on azure - [ ] Generate TPL's for our ``OS X`` builds, and re-enable ``sidre`` for this config. - [ ] Ensure that azure marks a build as failing if it fails unit tests. In the current iteration, this does not appear to be the case (see: https://dev.azure.com/axom/axom/_build/results?buildId=877&view=ms.vss-test-web.build-test-results-tab , where the ``sidre_lulesh`` tests are failing on the gcc configurations, but azure/github says that all checks are passing) - [ ] Fix ``sidre_lulesh`` test on gcc configurations. Perhaps reducing the number of threads on the ctest invocation would help? E.g. change ``j10`` to ``j8`` (or lower) - [ ] Update clang configurations to clang 8 and clang 9, instead of clang 4, 5 and 6 (per @rhornung67's request) - [ ] Update clang compiler images to include a working gfortran (@davidbeckingsale ?) - [ ] Consider removing ``docker_`` prefix from docker host-configs, since they are already located in the ``docker/`` directory (per @gzagaris's request)
1.0
non_process
0
212,483
16,453,786,757
IssuesEvent
2021-05-21 09:38:40
minova-afis/aero.minova.rcp
https://api.github.com/repos/minova-afis/aero.minova.rcp
closed
Release 12.X.17 under MacOS
test
Tested with server xxxxxx # Test protocol under macOS ## Login - [x] Login to the server using the default profile - [x] Login to the server by entering the credentials manually - [x] Repeated login with a profile whose password was entered incorrectly and where the application was closed right afterwards (see 2nd comment on #388) ### Saved search criteria are loaded - [x] Works ## No connection to the CAS possible - [x] The displayed error message contains details about the error - [x] While loading the index: error message, and the button can be activated again - [x] When opening a mask: if the mask has been loaded before, it is reused ## Workspace folder - [x] The current workspace folder is displayed in the settings - [x] The current workspace folder can be deleted via the settings - [x] Deleting a profile in the login mask also deletes the corresponding workspace folder ## Index printing <img width="823" alt="Bildschirmfoto 2021-04-20 um 22 27 50" src="https://user-images.githubusercontent.com/77741125/115459729-acb68880-a227-11eb-9ce3-49d04540c6d9.png"> - [x] All columns have the same order as displayed in the application (the order can be changed) ### Print settings - [x] XML/XSL can be created (workspace folder -> PDF) - [x] The font size can be changed - [x] The column width can be optimized; otherwise the width is taken from the index - [x] Empty columns can be hidden - [x] Group columns can be hidden - [x] Search criteria can be displayed - [x] The internal preview can be activated ## Traverse listener - [x] Tab selects the next field (when SelectAllControls is **not** set) - [x] Tab selects all controls (toolbar, section, etc.) 
(when SelectAllControls is set) - [x] Enter selects the **next empty** required field (when EnterSelectsFirstRequired is **not** set) - [x] Enter selects the **first empty** required field (when EnterSelectsFirstRequired is set) - [x] On Enter in a lookup box you stay in the **same** field (when LookupEnterSelectsNextRequired is **not** set) - [x] Enter in a lookup box selects the **next empty** required field (when LookupEnterSelectsNextRequired is set; EnterSelectsFirstRequired does not matter) ## PerspectiveSwitcher - [x] The perspective can be changed via the menu at the top - [ ] The perspective can be changed via the bar at the bottom - [x] After a restart, the same perspectives are present in the bar again - [ ] Perspectives can be closed via right-click (including the last one) - [x] Masks in the application.mdi may have different file names and IDs (see #487) # Tests for CTS VG Eibelstadt ## Mask "Anruf" (call) <img width="706" alt="Bildschirmfoto 2021-04-20 um 22 29 07" src="https://user-images.githubusercontent.com/77741125/115459886-da9bcd00-a227-11eb-806c-a44b78983c75.png"> - [x] Recording a call by entering a new test person - [x] Recording a call using an already existing test person - [x] Entering appointments simultaneously when there are 2 slots (neither occupied beforehand) - [x] Entering appointments simultaneously when there are 2 slots (one occupied beforehand) -> one succeeds, one error message <img width="574" alt="Bildschirmfoto 2021-04-20 um 22 35 35" src="https://user-images.githubusercontent.com/77741125/115460655-cf956c80-a228-11eb-8342-5fb660745e56.png"> - [x] A test appointment that was assigned to one person must not be assigned (overwritten) to another person (take the assignment and assign it to another person (lookup)) - [x] The details of an assigned test person are to be changed. 
The assigned appointment remains (the test person's details are updated, but the appointment is kept) ## Mask "Testperson" - [x] The password is not displayed in plain text - [x] The test person's password cannot be changed - [x] The test person's password can be reset ## Mask "Anmeldung" (registration) <img width="679" alt="Bildschirmfoto 2021-04-20 um 22 33 37" src="https://user-images.githubusercontent.com/77741125/115460366-7a595b00-a228-11eb-92df-a0ad2f97355f.png"> ### E-mail delivery - [x] positive test result to the test person - [x] positive test result to the test station - [x] negative test result to the test person - [x] negative test result to the test station ### Detail printing (mask "Anmeldung") - [x] Works with positive - [x] Works with negative
1.0
Release 12.X.17 under MacOS - Tested with server xxxxxx # Test protocol under macOS ## Login - [x] Login to the server using the default profile - [x] Login to the server by entering the credentials manually - [x] Repeated login with a profile whose password was entered incorrectly and where the application was closed right afterwards (see 2nd comment on #388) ### Saved search criteria are loaded - [x] Works ## No connection to the CAS possible - [x] The displayed error message contains details about the error - [x] While loading the index: error message, and the button can be activated again - [x] When opening a mask: if the mask has been loaded before, it is reused ## Workspace folder - [x] The current workspace folder is displayed in the settings - [x] The current workspace folder can be deleted via the settings - [x] Deleting a profile in the login mask also deletes the corresponding workspace folder ## Index printing <img width="823" alt="Bildschirmfoto 2021-04-20 um 22 27 50" src="https://user-images.githubusercontent.com/77741125/115459729-acb68880-a227-11eb-9ce3-49d04540c6d9.png"> - [x] All columns have the same order as displayed in the application (the order can be changed) ### Print settings - [x] XML/XSL can be created (workspace folder -> PDF) - [x] The font size can be changed - [x] The column width can be optimized; otherwise the width is taken from the index - [x] Empty columns can be hidden - [x] Group columns can be hidden - [x] Search criteria can be displayed - [x] The internal preview can be activated ## Traverse listener - [x] Tab selects the next field (when SelectAllControls is **not** set) - [x] Tab selects all controls (toolbar, section, etc.) 
(when SelectAllControls is set) - [x] Enter selects the **next empty** required field (when EnterSelectsFirstRequired is **not** set) - [x] Enter selects the **first empty** required field (when EnterSelectsFirstRequired is set) - [x] On Enter in a lookup box you stay in the **same** field (when LookupEnterSelectsNextRequired is **not** set) - [x] Enter in a lookup box selects the **next empty** required field (when LookupEnterSelectsNextRequired is set; EnterSelectsFirstRequired does not matter) ## PerspectiveSwitcher - [x] The perspective can be changed via the menu at the top - [ ] The perspective can be changed via the bar at the bottom - [x] After a restart, the same perspectives are present in the bar again - [ ] Perspectives can be closed via right-click (including the last one) - [x] Masks in the application.mdi may have different file names and IDs (see #487) # Tests for CTS VG Eibelstadt ## Mask "Anruf" (call) <img width="706" alt="Bildschirmfoto 2021-04-20 um 22 29 07" src="https://user-images.githubusercontent.com/77741125/115459886-da9bcd00-a227-11eb-806c-a44b78983c75.png"> - [x] Recording a call by entering a new test person - [x] Recording a call using an already existing test person - [x] Entering appointments simultaneously when there are 2 slots (neither occupied beforehand) - [x] Entering appointments simultaneously when there are 2 slots (one occupied beforehand) -> one succeeds, one error message <img width="574" alt="Bildschirmfoto 2021-04-20 um 22 35 35" src="https://user-images.githubusercontent.com/77741125/115460655-cf956c80-a228-11eb-8342-5fb660745e56.png"> - [x] A test appointment that was assigned to one person must not be assigned (overwritten) to another person (take the assignment and assign it to another person (lookup)) - [x] The details of an assigned test person are to be changed. 
The assigned appointment remains (the test person's details are updated, but the appointment is kept) ## Mask "Testperson" - [x] The password is not displayed in plain text - [x] The test person's password cannot be changed - [x] The test person's password can be reset ## Mask "Anmeldung" (registration) <img width="679" alt="Bildschirmfoto 2021-04-20 um 22 33 37" src="https://user-images.githubusercontent.com/77741125/115460366-7a595b00-a228-11eb-92df-a0ad2f97355f.png"> ### E-mail delivery - [x] positive test result to the test person - [x] positive test result to the test station - [x] negative test result to the test person - [x] negative test result to the test station ### Detail printing (mask "Anmeldung") - [x] Works with positive - [x] Works with negative
non_process
release x unter macos getestet mit server xxxxxx testprotokoll unter macos anmeldung anmeldung an den server mittles default profil anmeldung an den server durch manuelles eintragen der anmeldedaten wiederholtes anmelden mit einem profil bei dem das passwort falsch eingetragen wurde und die anwendung direkt darauf geschlossen wurde siehe kommentar gespeicherte suchkriterien werden geladen funktioniert keine verbindung zum cas möglich die angezeigte fehlermeldung enthält details zum fehler beim indexladen fehlermeldung und knopf ist wieder aktivierbar beim öffnen einer maske wenn die maske schon einmal geladen wurde wird diese verwendet workspace ordner der aktuelle workspace ordner wird in den einstellungen angezeigt der aktuelle workspace ordner kann über die einstellungen gelöscht werden das löschen eines profils in der login maske löscht auch den entsprechenden workspace ordner indexdruck img width alt bildschirmfoto um src alle spalten haben die selbe reihenfolge wie auch in der anwendung angezeigt reihenfolge kann verändert werden druckeinstellungen xml xsl können erstellt werden workspaceordner pdf schriftgröße kann verändert werden spaltenbreite kann optimiert werden ansonsten wird breite aus index übernommen leere spalten können verborgen werden gruppenspalten können verborgen werden suchkriterien können angezeigt werden interne vorschau kann aktiviert werden traverselistener tab selektiert das nächste feld wenn selectallcontrols nicht gesetzt ist tab selektiert alle controls toolbar section usw wenn selectallcontrols gesetzt ist enter selektiert das nächste leere pflichtfeld wenn enterselectsfirstrequired nicht gesetzt ist enter selektiert das erste leere pflichtfeld wenn enterselectsfirstrequired gesetzt ist bei enter in einer auswahlbox bleibt man im selben feld wenn lookupenterselectsnextrequired nicht gesetzt ist enter in einer auswahlbox selektiert das nächste leere pflichtfeld wenn lookupenterselectsnextrequired gesetzt ist enterselectsfirstrequired 
ist egal perspectiveswitcher die perspektive kann über das menü oben geändert werden die perspektive kann über die leiste unten geändert werden bei einem neustart sind in der leiste die gleichen perspektiven wieder vorhanden perspektiven können über rechtsklick geschlossen werden inklusive der letzten es wird unterstützt dass masken in der application mdi unterschiedliche dateinamen und ids haben siehe tests für cts vg eibelstadt maske anruf img width alt bildschirmfoto um src erfassung eines anrufs mittels eingabe einer neuen testperson erfassung eines anrufs mittels bereits existierender testperson gleichzeitiges erfassen von terminen wenn es slots gibt keiner vorher belegt gleichzeitiges erfassen von terminen wenn es slots gibt einer vorher belegt einer funktioniert eine fehlermeldung img width alt bildschirmfoto um src testtermin der einer person zugeordnet wurde darf keiner anderen person zugeordnet werden überschreiben zuordnung nehmen und einer anderen person zuordnen lookup daten einer zugeordneten testperson sollen geändert werden zugeordneter termin bleibt bestehen die angaben der testperson werden aktualisiert aber der termin bleibt bestehen maske testperson passwort wird nicht im klartext angezeigt passwort der testperson kann nicht geändert werden passwort der testperson kann zurückgesetzt werden maske anmeldung img width alt bildschirmfoto um src e mail versand positives testergebnis an testperson positives testergebnis an teststrecke negatives testergebnis an testperson negatives testergebnis an teststrecke detaildruck maske anmeldung funktioniert mit positiv funktioniert mit negativ
0
10,257
13,110,241,490
IssuesEvent
2020-08-04 20:16:16
googleapis/code-suggester
https://api.github.com/repos/googleapis/code-suggester
closed
Setup npm releases
type: process
- [ ] Have document detailing release notes i.e. changelog - [ ] Auto-deployment based on GitHub commit comments - [ ] Ensure auto-updated tags - [ ] Ensure auto-updating of docs that reference any versioning ### Description Create an automated npm release with auto-updating docs, tags, etc.
1.0
Setup npm releases - - [ ] Have document detailing release notes i.e. changelog - [ ] Auto-deployment based on GitHub commit comments - [ ] Ensure auto-updated tags - [ ] Ensure auto-updating of docs that reference any versioning ### Description Create an automated npm release with auto-updating docs, tags, etc.
process
setup npm releases have document detailing release notes i e changelog auto deployment based on github commit comments ensure auto updated tags ensure auto updating of docs that reference any versioning description create an automated npm release with auto updating docs tags etc
1
248,414
21,016,861,454
IssuesEvent
2022-03-30 11:51:49
Uuvana-Studios/longvinter-windows-client
https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client
opened
can't type password
Bug Not Tested
**Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See an error **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots** If applicable, add screenshots to help explain your problem. ![unknown (5)](https://user-images.githubusercontent.com/100501981/160828236-1b67a6fd-1319-472b-a47a-e1d87ac757df.png) **Desktop (please complete the following information):** - OS: [e.g. Windows] - Game Version [e.g. 1.0] - Steam Version [e.g. 1.0] **Additional context** Add any other context about the problem here.
1.0
can't type password - **Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See an error **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots** If applicable, add screenshots to help explain your problem. ![unknown (5)](https://user-images.githubusercontent.com/100501981/160828236-1b67a6fd-1319-472b-a47a-e1d87ac757df.png) **Desktop (please complete the following information):** - OS: [e.g. Windows] - Game Version [e.g. 1.0] - Steam Version [e.g. 1.0] **Additional context** Add any other context about the problem here.
non_process
can t type password describe the bug a clear and concise description of what the bug is to reproduce steps to reproduce the behavior go to click on scroll down to see an error expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem desktop please complete the following information os game version steam version additional context add any other context about the problem here
0
117,690
17,512,680,064
IssuesEvent
2021-08-11 01:05:04
harrinry/pulsar
https://api.github.com/repos/harrinry/pulsar
opened
CVE-2019-20330 (High) detected in jackson-databind-2.8.11.4.jar, jackson-databind-2.6.5.jar
security vulnerability
## CVE-2019-20330 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.11.4.jar</b>, <b>jackson-databind-2.6.5.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.8.11.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: pulsar/pulsar-sql/presto-distribution/pom.xml</p> <p>Path to vulnerable library: 0150316_LVRAMP/downloadResource_AEDNMT/20210810150945/jackson-databind-2.8.11.4.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.11.4.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.6.5.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: pulsar/examples/spark/pom.xml</p> <p>Path to vulnerable library: 0150316_LVRAMP/downloadResource_AEDNMT/20210810150943/jackson-databind-2.6.5.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.6.5.jar** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain net.sf.ehcache blocking. 
<p>Publish Date: 2020-01-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20330>CVE-2019-20330</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2526">https://github.com/FasterXML/jackson-databind/issues/2526</a></p> <p>Release Date: 2020-01-03</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.11.4","packageFilePaths":["/pulsar-sql/presto-distribution/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.11.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.5","packageFilePaths":["/examples/spark/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.6.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-20330","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain net.sf.ehcache blocking.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20330","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2019-20330 (High) detected in jackson-databind-2.8.11.4.jar, jackson-databind-2.6.5.jar - ## CVE-2019-20330 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.11.4.jar</b>, <b>jackson-databind-2.6.5.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.8.11.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: pulsar/pulsar-sql/presto-distribution/pom.xml</p> <p>Path to vulnerable library: 0150316_LVRAMP/downloadResource_AEDNMT/20210810150945/jackson-databind-2.8.11.4.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.11.4.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.6.5.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: pulsar/examples/spark/pom.xml</p> <p>Path to vulnerable library: 0150316_LVRAMP/downloadResource_AEDNMT/20210810150943/jackson-databind-2.6.5.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.6.5.jar** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain net.sf.ehcache blocking. 
<p>Publish Date: 2020-01-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20330>CVE-2019-20330</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2526">https://github.com/FasterXML/jackson-databind/issues/2526</a></p> <p>Release Date: 2020-01-03</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.11.4","packageFilePaths":["/pulsar-sql/presto-distribution/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.11.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.5","packageFilePaths":["/examples/spark/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.6.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.7.9.7,2.8.11.5,2.9.10.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-20330","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.2 lacks certain net.sf.ehcache blocking.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20330","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in jackson databind jar jackson databind jar cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pulsar pulsar sql presto distribution pom xml path to vulnerable library lvramp downloadresource aednmt jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pulsar examples spark pom xml path to vulnerable library lvramp downloadresource aednmt jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in base branch master vulnerability details fasterxml jackson databind x before lacks certain net sf ehcache blocking publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve 
vulnerabilitydetails fasterxml jackson databind x before lacks certain net sf ehcache blocking vulnerabilityurl
0
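The fix-resolution line in the record above (2.7.9.7 / 2.8.11.5 / 2.9.10.2 for an advisory covering 2.x before 2.9.10.2) amounts to a per-branch version comparison. A hypothetical helper sketching that check in Python — illustrative only, not part of any real scanning tool:

```python
# Encode the record's fix resolutions: each maintained 2.x branch has its own
# first-fixed version; other 2.x branches have no backported fix.
FIXED = {(2, 7): (2, 7, 9, 7), (2, 8): (2, 8, 11, 5), (2, 9): (2, 9, 10, 2)}


def parse(version):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))


def is_vulnerable(version):
    """True if a jackson-databind version falls under this advisory's range."""
    t = parse(version)
    if t[:2] in FIXED:
        # a fixed release exists on this branch: vulnerable only below it
        return t < FIXED[t[:2]]
    # no backport on this branch: every 2.x release before 2.9.10.2 is affected
    return t[0] == 2 and t < (2, 9, 10, 2)
```

Applied to the two libraries flagged in the record, `is_vulnerable("2.8.11.4")` and `is_vulnerable("2.6.5")` both come out affected, matching the report.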
3
2,490,629,164
IssuesEvent
2015-01-02 17:45:31
tinkerpop/tinkerpop3
https://api.github.com/repos/tinkerpop/tinkerpop3
closed
On the concept of "step modulators".
enhancement process
We have opened up a can of worms with `as()` and `by()`. I call these "step modulators" in that they are not steps in themselves, but affect the step that came before. For instance: . `v.out().as('a')`: The `VertexStep.setLabel()` is called. (`Step` interface) . `v.out().groupCount().by('name')`: The `GroupCountStep.addFunction()` is called. (`FunctionAcceptor` interface) The `by()`-modulator greatly reduces the complexity of the `GraphTraversal` API because we were able to remove various method overloadings and moreover, we reduced a massive pain in the ass around var args and method disambiguation issues. Sweet. Can we use the concept of "step modulators" in other places? Please see #415 and #411. Here are some examples of new modulators and their application. There is really nothing new here, just trying to aggregate it into one ticket and to see if this is a viable path forward. ```groovy g.V.has('age').between(30,32)... g.V.has('name').equal('marko')... g.V.repeat(g.of().out('knows')).loops(3)... g.V.repeat(g.of().out('knows')).until{it == v}... g.V.repeat(g.of().out('knows')).loops(3).emit()... g.V.repeat(g.of().out('knows')).until(label,'executive').emit(label,within,['secretary','engineer'])... ... ``` In essence `between()`, `equal()`, `loops()`, `until()`, `emit()` are modulators and not actual steps. It's a way to parameterize the respective "true step" without having to have a bunch of method overloadings -- and provides a name to parameters (not just via JavaDoc). Are there other steps that can benefit from such "modulators"? Is this a path worth going down? @dkuppitz @mbroecheler @BrynCooke @joshsh @rjbriody
1.0
On the concept of "step modulators". - We have opened up a can of worms with `as()` and `by()`. I call these "step modulators" in that they are not steps in themselves, but affect the step that came before. For instance: . `v.out().as('a')`: The `VertexStep.setLabel()` is called. (`Step` interface) . `v.out().groupCount().by('name')`: The `GroupCountStep.addFunction()` is called. (`FunctionAcceptor` interface) The `by()`-modulator greatly reduces the complexity of the `GraphTraversal` API because we were able to remove various method overloadings and moreover, we reduced a massive pain in the ass around var args and method disambiguation issues. Sweet. Can we use the concept of "step modulators" in other places? Please see #415 and #411. Here are some examples of new modulators and their application. There is really nothing new here, just trying to aggregate it into one ticket and to see if this is a viable path forward. ```groovy g.V.has('age').between(30,32)... g.V.has('name').equal('marko')... g.V.repeat(g.of().out('knows')).loops(3)... g.V.repeat(g.of().out('knows')).until{it == v}... g.V.repeat(g.of().out('knows')).loops(3).emit()... g.V.repeat(g.of().out('knows')).until(label,'executive').emit(label,within,['secretary','engineer'])... ... ``` In essence `between()`, `equal()`, `loops()`, `until()`, `emit()` are modulators and not actual steps. It's a way to parameterize the respective "true step" without having to have a bunch of method overloadings -- and provides a name to parameters (not just via JavaDoc). Are there other steps that can benefit from such "modulators"? Is this a path worth going down? @dkuppitz @mbroecheler @BrynCooke @joshsh @rjbriody
process
on the concept of step modulators we have opened up a can of worms with as and by i call these step modulators in that they are not steps in themselves but effect the step that came previous for instance v out as a the vertexstep setlabel is called step interface v out groupcount by name the groupcountstep addfunction is called functionacceptor interface the by modulator greatly reduce the complexity of the graphtraversal api because we were able to remove various method overloadings and moreover we reduced a massive pain in the ass around var args and method disambiguation issues sweet can we use the concept of step modulators in other places please see and here are some examples of new modulators and their application there is really nothing new here just trying to aggregate it into one ticket and to see if this is a viable path forward groovy g v has age between g v has name equal marko g v repeat g of out knows loops g v repeat g of out knows until it v g v repeat g of out knows loops emit g v repeat g of out knows until label executive emit label within in essence between equal loops until emit are modulators and not actual steps its a way to parameterize the respective true step without having to have a bunch of method overloadings and provides a name to parameters not just via javadoc are there other steps that can benefit from such modulators is this a path worth going down dkuppitz mbroecheler bryncooke joshsh rjbriody
1
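The "step modulator" pattern described in the issue above (a fluent call that configures the previous step instead of appending a new one) can be sketched in a few lines. This is a hypothetical toy builder for illustration only, not TinkerPop's actual `GraphTraversal` implementation:

```python
class Step:
    """One step in a traversal, with an optional label and modulator parameters."""
    def __init__(self, name):
        self.name = name
        self.label = None
        self.params = {}

class Traversal:
    """Toy fluent builder: some calls append steps, others modulate the last step."""
    def __init__(self):
        self.steps = []

    def out(self, edge_label):
        # A real step: appended to the traversal.
        self.steps.append(Step("out(%s)" % edge_label))
        return self

    def as_(self, label):
        # A modulator: parameterizes the step that came before it.
        self.steps[-1].label = label
        return self

    def by(self, key):
        # Another modulator: accumulates arguments on the previous step.
        self.steps[-1].params.setdefault("by", []).append(key)
        return self

t = Traversal().out("knows").as_("a").by("name")
```

Note that `as_()` and `by()` add no steps of their own, which is exactly why the real API can shed the method overloadings the issue complains about.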
143,917
11,583,958,778
IssuesEvent
2020-02-22 14:32:01
marzer/tomlplusplus
https://api.github.com/repos/marzer/tomlplusplus
closed
Add node_view::value_or()
enhancement requires tests
To turn this: ```cpp auto fval = tbl[key][0].as_floating_point() ? **tbl[key][0].as_floating_point() : 0.0f; ``` Into this: ```cpp auto fval = tbl[key][0].value_or(0.0f); ```
1.0
non_process
0
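The `value_or()` shorthand requested above collapses "check, then dereference, else default" into one call. A minimal Python analogue of the same pattern (a hypothetical helper, not the toml++ API):

```python
def value_or(table, path, default):
    """Follow nested lookups; return `default` on a missing key/index or a type mismatch."""
    node = table
    for key in path:
        try:
            node = node[key]
        except (KeyError, IndexError, TypeError):
            return default
    # Mirror the typed accessor: only accept a value of the default's type.
    return node if isinstance(node, type(default)) else default

cfg = {"thresholds": [0.25, 0.5]}
```

The default's type doubles as the requested type, which is roughly what passing `0.0f` does in the C++ version.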
92,515
3,871,867,653
IssuesEvent
2016-04-11 11:42:44
NRGI/rgi-assessment-tool
https://api.github.com/repos/NRGI/rgi-assessment-tool
closed
Dependant question model
enhancement in progress priority workflow
Some questions will need to appear only when a previous question has been answered a certain way. This is likely addressed by a pending pull request, but we will check.
1.0
non_process
0
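The dependent-question model in that issue (a question appears only once a previous question has a particular answer) reduces to a small visibility predicate. A sketch using a hypothetical schema, not the RGI tool's actual data model:

```python
def is_visible(question, answers):
    """Questions without a dependency always show; dependent ones show only
    when the parent question already has the required answer."""
    dep = question.get("depends_on")
    if dep is None:
        return True
    return answers.get(dep["question_id"]) == dep["answer"]

q1 = {"id": "q1"}
q2 = {"id": "q2", "depends_on": {"question_id": "q1", "answer": "yes"}}
```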
19,702
26,052,733,559
IssuesEvent
2022-12-22 20:35:37
MPMG-DCC-UFMG/C01
https://api.github.com/repos/MPMG-DCC-UFMG/C01
opened
Steps interface with Vue.js - Steps with iterables, conditions, and "sources"
[1] Bug [2] Alta Prioridade [0] Desenvolvimento [3] Processamento Dinâmico
## Expected Behavior The system steps that include some kind of inner step (iterables for iterations, conditions for conditionals, and a "source" for assignment) must work correctly. The interface must provide a select to choose the inner step in this case and also display the corresponding parameters, allowing optional parameters to be included where applicable. ## Current Behavior In the current version of the Vue interface it is not possible to configure the inner steps in the interface. An implementation has already been started on the `issue-882` branch: the Step component holds a reference to the name and arguments of the inner step, if one exists. The computed properties for this inner step have also been implemented. The fields for the inner step still need to be added to the Step component's template. A refactoring may be needed to abstract the properties shared by "outer" and "inner" steps and simply add two instances of that abstraction to the Step component, but that can be treated as a future improvement. ## System Branch `issue-882`.
1.0
process
1
3,637
6,670,937,851
IssuesEvent
2017-10-04 03:28:11
triplea-game/triplea
https://api.github.com/repos/triplea-game/triplea
opened
Publish public key used to verify release artifact checksum signatures
category: dev & admin process discussion type: admin task
This issue is a follow-up to #2463, which implemented signing of the release artifact checksum files to ensure their integrity. In order for users to verify the signatures, they need access to the public key corresponding to the private key used to create the signatures. The purpose of this issue is to discuss how we should distribute that public key. I'll start off by proposing two possible solutions: <hr> #### 1. Publish via web page In this solution, we would embed the public key on a web page along with its fingerprint. The web page should be served via HTTPS to ensure the public key and fingerprint can't be altered by a MITM. An example of this solution for another project can be found [here](https://syncthing.net/security.html). I was going to originally suggest we use the [Download page](http://www.triplea-game.org/download/) of the website, but I keep forgetting we haven't yet gotten HTTPS working for this site. An alternative would be to place the public key and fingerprint in the wiki or README and simply place a link to the appropriate location on the Download page. #### 2. Publish via keyserver In this solution, we would simply publish the public key via a GPG keyserver. An example of this solution for another project can be found [here](https://github.com/nodejs/node/blob/master/README.md#verifying-binaries). We would still have to provide instructions for importing the key. Again, probably via a link on the Download page to the wiki or README. <hr> I'm leaning towards the following solution: * Start by publishing the public key via (1) and eventually move to (2) simply because there are fewer moving parts. * Add a "Verifying releases" section to the README. * Include the ascii-armored public key. * Include the public key fingerprint. * Include instructions for importing the public key into GPG. * Include instructions for verifying checksum file integrity using GPG. * Include instructions for verifying artifact file integrity using checksums. 
* Provide a link on the Download page to the "Verifying releases" section in the README (via an HTTPS URL). Any other ideas?
1.0
process
1
78,301
22,193,320,231
IssuesEvent
2022-06-07 02:59:12
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
closed
tensorflow cpu module's speed lower on windows than linux
stat:awaiting response type:build/install type:support stalled 1.4.0
System information - Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): windows7 64bit and ubuntu 16.04 64bit - TensorFlow installed from (source or binary): built tensorflow source to a shared lib - TensorFlow version (use command below): tensorflow v1.3.0 - Python version: 3.5 - Bazel version (if compiling from source): N/A - GCC/Compiler version (if compiling from source): N/A - CUDA/cuDNN version: N/A - GPU model and memory: N/A - Exact command to reproduce: N/A Describe the problem We train a tensorflow model and detect faces on both windows7 and ubuntu 16.04, but it takes about twice as long on windows7 as on ubuntu16.04. We want to know whether this is normal, and if it is, what the reason is. windows7 PC environment: CPU: Intel Core i3 2120 time: 80~160 ms ubuntu16.04 PC environment: CPU: Intel(R) Core(TM) i3-3220 CPU@3.30GHz time: 40~100 ms
1.0
non_process
0
409,609
27,745,305,205
IssuesEvent
2023-03-15 16:39:00
elmsln/issues
https://api.github.com/repos/elmsln/issues
opened
user context exercise: the everything solution
documentation 2025 super daemon
Not sure whether to consider this CLI or what, but here goes with the discussion just had. If we think of the UI as just a way of accessing things in the CLI, we can #1263 entice users to learn both and think with both mental models, effectively using the CLI / folder / action tree as a backdrop to learn the UI. Methods like the following can entice users the other direction, into using the UI to understand that there is a CLI: - Clicking print (a singular button) opens the daemon, defaulting the "slash" command default to be `/formats` and showing a listing of all items tagged as being part of that context - Clicking the search button could open up and default the daemon context to "content", effectively eliminating the current custom search dialog by integrating its capabilities into the daemon (almost there to be honest minus full text search) - Clicking help (which needs to be added to the UI) or whatever we call it, invokes the daemon and sets a `help` context - Opening the Daemon by default allows for searching across all contexts that are active (so a super list of all possible options) - Selecting an option, if it has a UI parallel, needs to run #1263 to help users draw the connection between the two Additional things we could do that we can't now - in context / authoring you could hit enter, then / to call up an in-context version of the daemon which is defaulted to the `editor` context - in outline designer the "help" is created w/ the daemon using that context to default to relevant options - if we detect a new site and 1st access, default daemon open with logical 1st steps presented (a 1st time context) - if we detect a new site, and 1st access, and 1st time ever using the system, default daemon open with operations associated with learning the daemon and UI sides of things to understand the association and to get editing with a single click - #716 help / daemon could have the login button - #1277 #1246 #1254 - advanced / experimental / alpha functionality that can be
enabled via this UI - importer context for multiple formats - export context for seeing all the ways to get the data back out of the system - #1274 What we need to make this possible - support for tutorial link to importers so ppl can see how / what's required - argument acceptance and backing out - #1263 - support for collapsed details per item - small mode boolean for css to support small, absolute positioning mode vs modal - ability to remote load options for contexts driven by system changes
1.0
non_process
0
15,156
18,908,732,187
IssuesEvent
2021-11-16 11:54:52
streamnative/pulsar-flink
https://api.github.com/repos/streamnative/pulsar-flink
reopened
Remove unused code in master branch
type/cleanup platform/data-processing
There is a lot of dangling code (especially some new connector files) in the repo that is not being used. It raises the barrier to fully understanding things in this repo.
1.0
process
1
33,283
2,763,322,372
IssuesEvent
2015-04-29 08:29:43
ceylon/ceylon-spec
https://api.github.com/repos/ceylon/ceylon-spec
opened
lots of bugs typechecking native
bug high priority
These are the problems I noticed immediately, surely there are more: - All versions of the `native` dec are forced to have the exact same annotations, which is no good, for, e.g. the `doc` annotation. - This seems to be a catchall way to avoid directly checking stuff like `variable`, but if we want decent messages we _do_ have to directly check `variable` and perhaps others. - The upper bounds of a type parameter of a `native` implementation are never checked for consistency with the header. - The members of a `native` class are never checked at all! Currently I can write a `native` implementation class which simply doesn't implement the shared members of the header. Finally: - References to a `native` dec resolve to the last `native` implementation to be declared in the file, not to the `native` header.
1.0
non_process
0
265,171
8,338,321,765
IssuesEvent
2018-09-28 14:00:25
edenlabllc/ehealth.api
https://api.github.com/repos/edenlabllc/ehealth.api
closed
Insert record into dictionary LEGAL_FORM, PROD, #J300
kind/support priority/medium status/wontfix
Please add КНП ("КОМУНАЛЬНЕ НЕКОМЕРЦІЙНЕ ПІДПРИЄМСТВО", i.e. "municipal non-profit enterprise") to the current "LEGAL_FORM" dictionary, thank you.
1.0
non_process
0
11,688
14,542,938,667
IssuesEvent
2020-12-15 16:17:18
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
No indication of permission prerequisites.
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
I follow the guidance on the page and get the following error; Access denied. Dermot Canniffe needs Create permissions to perform the action. For more information, contact the Azure DevOps Server administrator. This, even though I'm a PCA in the organisation, and an owner and admin on the project in question. No indication where I can set this permission , whom to address this to, where to see a breakdown of what permissions I do have, etc. etc. Opaque permissions model. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 77d95db6-9983-7346-d0eb-4b7443e4e252 * Version Independent ID: 0a22cccc-318d-592f-d1ab-09ec01d88087 * Content: [Environment - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops) * Content Source: [docs/pipelines/process/environments.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/environments.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
process
1
191
2,505,173,333
IssuesEvent
2015-01-11 04:52:16
bitcoin/secp256k1
https://api.github.com/repos/bitcoin/secp256k1
opened
Documentation should provide example callers.
documentation
We've seen some people attempting to use the library calling into random internal functions. That should be harder to do now... but now that the basic signature interface is relatively safe to use (as such things go) there should be some easy copy and paste examples for correct usage.
1.0
non_process
0
15,451
19,665,169,591
IssuesEvent
2022-01-10 21:33:15
GoogleCloudPlatform/microservices-demo
https://api.github.com/repos/GoogleCloudPlatform/microservices-demo
closed
cut a new 0.3.5 release from main, please
type: process priority: p2
### Describe the bug Some services (Payment, for sure) do not have options to disable google-specific tracing/profiler. The Payment service crashes if the cluster is not in GCP. ### To Reproduce start the Payment service in a K8s cluster not on GCP ### Logs ``` /usr/src/app/node_modules/@google-cloud/profiler/build/src/index.js:120 throw new Error('Project ID must be specified in the configuration'); ^ Error: Project ID must be specified in the configuration at initConfigMetadata (/usr/src/app/node_modules/@google-cloud/profiler/build/src/index.js:120:15) at processTicksAndRejections (node:internal/process/task_queues:96:5) at runNextTicks (node:internal/process/task_queues:65:3) at listOnTimeout (node:internal/timers:526:9) at processTimers (node:internal/timers:500:7) at async createProfiler (/usr/src/app/node_modules/@google-cloud/profiler/build/src/index.js:158:26) at async Object.start (/usr/src/app/node_modules/@google-cloud/profiler/build/src/index.js:182:22) ``` ### Environment EKS cluster ### Additional context The code is already changed in the repo; you only need to cut a new release and rebuild the docker images ### Exposure the bug is persistent and widespread
1.0
process
1
9,888
12,889,639,630
IssuesEvent
2020-07-13 14:49:27
zammad/zammad
https://api.github.com/repos/zammad/zammad
opened
Zammad can't import specific ISO-2022-JP mails
bug mail processing prioritized by payment verified
### Infos: * Used Zammad version: 3.4 * Installation method (source, package, ..): any * Operating system: any * Database + version: any * Elasticsearch version: any * Browser + version: any * Ticket-ID: #1077341 ### Expected behavior: Zammad imports ISO-2022-JP without issues. ### Actual behavior: In some special cases, Zammad can't import mails encoded with ISO-2022-JP. (You can find samples in the above-mentioned ticket IDs - anonymized mails usually no longer contain the issue, as the byte order is changed.) 
In those situations, Zammad logs the following: ``` "ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/aa7e19b4dfa0dc04a4b6b4b35cfff7ee.eml, please create an issue at https://github.com/zammad/zammad/issues" "ERROR: #<Encoding::InvalidByteSequenceError: \"%\" followed by \" \" on ISO-2022-JP>" Traceback (most recent call last): 35: from bin/rails:9:in `<main>' 34: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `require' 33: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:257:in `load_dependency' 32: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `block in require' 31: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require' 30: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi' 29: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register' 28: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi' 27: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require' 26: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands.rb:18:in `<main>' 25: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command.rb:46:in `invoke' 24: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command/base.rb:69:in `perform' 23: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch' 22: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command' 21: from 
/usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/command.rb:27:in `run' 20: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `perform' 19: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `eval' 18: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>' 17: from /opt/zammad/app/models/channel/email_parser.rb:481:in `process_unprocessable_mails' 16: from /opt/zammad/app/models/channel/email_parser.rb:481:in `glob' 15: from /opt/zammad/app/models/channel/email_parser.rb:482:in `block in process_unprocessable_mails' 14: from /opt/zammad/app/models/channel/email_parser.rb:117:in `process' 13: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:108:in `timeout' 12: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch' 11: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch' 10: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `block in catch' 9: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:93:in `block in timeout' 8: from /opt/zammad/app/models/channel/email_parser.rb:118:in `block in process' 7: from /opt/zammad/app/models/channel/email_parser.rb:139:in `_process' 6: from /opt/zammad/app/models/channel/email_parser.rb:81:in `parse' 5: from /opt/zammad/app/models/channel/email_parser.rb:508:in `force_parts_encoding_if_needed' 4: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `block in delegating_block' 3: from /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `each' 2: from /opt/zammad/app/models/channel/email_parser.rb:508:in `block in force_parts_encoding_if_needed' 1: from /opt/zammad/app/models/channel/email_parser.rb:515:in `force_single_part_encoding_if_needed' /opt/zammad/app/models/channel/email_parser.rb:515:in `encode': "%" followed by 
" " on ISO-2022-JP (Encoding::InvalidByteSequenceError) 22: from bin/rails:9:in `<main>' 21: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `require' 20: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:257:in `load_dependency' 19: from /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `block in require' 18: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require' 17: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi' 16: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register' 15: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi' 14: from /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require' 13: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands.rb:18:in `<main>' 12: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command.rb:46:in `invoke' 11: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command/base.rb:69:in `perform' 10: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch' 9: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command' 8: from /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/command.rb:27:in `run' 7: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `perform' 6: from /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `eval' 5: from 
/usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>' 4: from /opt/zammad/app/models/channel/email_parser.rb:481:in `process_unprocessable_mails' 3: from /opt/zammad/app/models/channel/email_parser.rb:481:in `glob' 2: from /opt/zammad/app/models/channel/email_parser.rb:482:in `block in process_unprocessable_mails' 1: from /opt/zammad/app/models/channel/email_parser.rb:115:in `process' /opt/zammad/app/models/channel/email_parser.rb:133:in `rescue in process': #<Encoding::InvalidByteSequenceError: "%" followed by " " on ISO-2022-JP> (RuntimeError) /opt/zammad/app/models/channel/email_parser.rb:515:in `encode' /opt/zammad/app/models/channel/email_parser.rb:515:in `force_single_part_encoding_if_needed' /opt/zammad/app/models/channel/email_parser.rb:508:in `block in force_parts_encoding_if_needed' /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `each' /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/delegate.rb:349:in `block in delegating_block' /opt/zammad/app/models/channel/email_parser.rb:508:in `force_parts_encoding_if_needed' /opt/zammad/app/models/channel/email_parser.rb:81:in `parse' /opt/zammad/app/models/channel/email_parser.rb:139:in `_process' /opt/zammad/app/models/channel/email_parser.rb:118:in `block in process' /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:93:in `block in timeout' /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `block in catch' /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch' /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:33:in `catch' /usr/local/rvm/rubies/ruby-2.6.5/lib/ruby/2.6.0/timeout.rb:108:in `timeout' /opt/zammad/app/models/channel/email_parser.rb:117:in `process' /opt/zammad/app/models/channel/email_parser.rb:482:in `block in process_unprocessable_mails' /opt/zammad/app/models/channel/email_parser.rb:481:in `glob' /opt/zammad/app/models/channel/email_parser.rb:481:in 
`process_unprocessable_mails' /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `<main>' /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `eval' /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands/runner/runner_command.rb:41:in `perform' /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/command.rb:27:in `run' /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command' /usr/local/rvm/gems/ruby-2.6.5/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch' /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command/base.rb:69:in `perform' /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/command.rb:46:in `invoke' /usr/local/rvm/gems/ruby-2.6.5/gems/railties-5.2.4.3/lib/rails/commands.rb:18:in `<main>' /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `require' /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:21:in `block in require_with_bootsnap_lfi' /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/loaded_features_index.rb:65:in `register' /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:20:in `require_with_bootsnap_lfi' /usr/local/rvm/gems/ruby-2.6.5/gems/bootsnap-1.3.2/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:29:in `require' /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `block in require' /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:257:in `load_dependency' /usr/local/rvm/gems/ruby-2.6.5/gems/activesupport-5.2.4.3/lib/active_support/dependencies.rb:291:in `require' bin/rails:9:in `<main>' ``` ### Steps to reproduce the behavior: * have a specific byte sequence of a 
ISO-2022-JP encoded mail * try to import it. Yes, I'm sure this is a bug and not a feature request or a general question.
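The failing call at the bottom of the trace is Ruby's `String#encode`, which raises `Encoding::InvalidByteSequenceError` when the source bytes are malformed. A minimal sketch reproducing the reported byte pair ("%" followed by a space inside a JIS X 0208 segment) and a tolerant conversion — the workaround shown here is an illustration, not necessarily Zammad's actual fix:

```ruby
# ESC $ B switches ISO-2022-JP into the two-byte JIS X 0208 segment;
# the pair 0x25 0x20 ("%" then space) is invalid there, matching the
# error message from the issue. ESC ( B switches back to ASCII.
bad = (+"\e$B%\x20\e(B").force_encoding(Encoding::ISO_2022_JP)

begin
  bad.encode(Encoding::UTF_8) # raises Encoding::InvalidByteSequenceError
rescue Encoding::InvalidByteSequenceError => e
  puts "raised: #{e.message}"
end

# Tolerant conversion: substitute invalid byte sequences instead of
# raising, so the mail body can still be imported (with placeholders).
safe = bad.encode(Encoding::UTF_8,
                  invalid: :replace, undef: :replace, replace: "?")
puts safe.valid_encoding? # true
```

The trade-off of `invalid: :replace` is silent data loss at the bad bytes, which is usually preferable to rejecting the whole mail as unprocessable.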
1.0
process
1
4,201
7,164,074,106
IssuesEvent
2018-01-29 09:58:24
DynareTeam/dynare
https://api.github.com/repos/DynareTeam/dynare
closed
nested @#ifndef and @#ifdef don't work
bug macroprocessor
Email from @JohannesPfeifer: If I use ``` @#define risk_sharing = 0 @#if risk_sharing == 0 @#ifndef endogenous_discount_factor @#define endogenous_discount_factor = 1 @#endif @#endif ``` In a mod-file, I get an error ``` @#if/@#ifdef/@#ifndef not matched by an @#endif or file does not end with a new line (unexpected end of file) ```
1.0
nested @#ifndef and @#ifdef don't work - Email from @JohannesPfeifer: If I use ``` @#define risk_sharing = 0 @#if risk_sharing == 0 @#ifndef endogenous_discount_factor @#define endogenous_discount_factor = 1 @#endif @#endif ``` In a mod-file, I get an error ``` @#if/@#ifdef/@#ifndef not matched by an @#endif or file does not end with a new line (unexpected end of file) ```
process
nested ifndef and ifdef don t work email from johannespfeifer if i use define risk sharing if risk sharing ifndef endogenous discount factor define endogenous discount factor endif endif in a mod file i get an error if ifdef ifndef not matched by an endif or file does not end with a new line unexpected end of file
1
121,296
12,122,021,249
IssuesEvent
2020-04-22 10:14:42
dry-python/returns
https://api.github.com/repos/dry-python/returns
opened
Create "Maintaining.md"
documentation
In this document I will explain: - How releases are made - How decisions are made and who has the final word - What processes we enforce (for example: RFC for our core values #346)
1.0
Create "Maintaining.md" - In this document I will explain: - How releases are made - How decisions are made and who has the final word - What processes we enforce (for example: RFC for our core values #346)
non_process
create maintaining md in this document i will explain how releases are made how decisions are made and who has the final word what processes we enforce for example rfc for our core values
0
14,018
16,816,924,519
IssuesEvent
2021-06-17 08:29:59
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] Participant details page > Table columns alignment issue
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
Participant details page > Enrollment history and consent history table columns should be aligned properly (As per invision screen) ![ss4](https://user-images.githubusercontent.com/71445210/111454145-5478fc00-873a-11eb-9718-210d30fe3e8a.png)
3.0
[PM] Participant details page > Table columns alignment issue - Participant details page > Enrollment history and consent history table columns should be aligned properly (As per invision screen) ![ss4](https://user-images.githubusercontent.com/71445210/111454145-5478fc00-873a-11eb-9718-210d30fe3e8a.png)
process
participant details page table columns alignment issue participant details page enrollment history and consent history table columns should be aligned properly as per invision screen
1
19,189
25,314,861,210
IssuesEvent
2022-11-17 20:39:13
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Cypress 5.x does not support UMD module type in tsconfig.json
stage: ready for work topic: typescript topic: preprocessors :wrench:
### Current behavior: When I upgrade from v4.12.1 to v5.x I get an error like this: ``` The following error originated from your test code, not from Cypress. > Cannot find module '../../path/to/something/that/used/to/work' When Cypress detects uncaught errors originating from your test code it will automatically fail the current test. Cypress could not associate this error to any specific test. We dynamically generated a new test to display this failure. Check your console for the stack trace or click this message to see where it originated from. src/integration/nameOfSpec%20sync:2:1 1 | function webpackEmptyContext(req) { > 2 | var e = new Error("Cannot find module '" + req + "'"); | ^ 3 | e.code = 'MODULE_NOT_FOUND'; 4 | throw e; 5 | } ``` ### Workaround Change tsconfig.json from `"module": "umd"` to `"module": "commonjs"`. Our project had this set to `umd` for the past 2 years for whatever reason, made sense back at the time to have it that way. Thankfully, setting it to `commonjs` does not appear to break anything with our tests so the workaround is acceptable for us. I did not find anything in the Cypress docs when googling about "cypress tsconfig module umd" so it seems like this is undocumented. ### Desired behavior: It would work without throwing an error about "Cannot find module". Or: It should be documented/warned that module type must NOT be `umd` (required to be `commonjs`? I have not tried other options other than `commonjs` and `umd`) - possibly could be documented [here](https://docs.cypress.io/guides/tooling/typescript-support.html#Configure-tsconfig-json) ### Test code to reproduce In project's `tsconfig.json` set `"module": "umd"` and you will receive an error during test execution. 
Repro steps: * clone https://github.com/cypress-io/cypress-and-jest-typescript-example * Bump cypress version to 5.x * Set tsconfig.json to have `"module": "umd"` * Run `npm run cy:open` and run the spec * error is shown: ``` An uncaught error was detected outside of a testfailed No commands were issued in this test. Error The following error originated from your test code, not from Cypress. > Cannot find module '../../src/foo' When Cypress detects uncaught errors originating from your test code it will automatically fail the current test. Cypress could not associate this error to any specific test. We dynamically generated a new test to display this failure. Check your console for the stack trace or click this message to see where it originated from. cypress/integration%20sync:2:1 1 | function webpackEmptyContext(req) { > 2 | var e = new Error("Cannot find module '" + req + "'"); | ^ 3 | e.code = 'MODULE_NOT_FOUND'; 4 | throw e; 5 | } ``` The workaround is just not to use the `umd` module type, but it would be good if cypress could either warn against this during runtime or at least here: https://docs.cypress.io/guides/tooling/typescript-support.html#Configure-tsconfig-json And maybe someone has a legit use case for using `umd` * but I don't know what that would be. ### Versions Cypress v5.0.0, 5.1.0, 5.2.0
1.0
Cypress 5.x does not support UMD module type in tsconfig.json - ### Current behavior: When I upgrade from v4.12.1 to v5.x I get an error like this: ``` The following error originated from your test code, not from Cypress. > Cannot find module '../../path/to/something/that/used/to/work' When Cypress detects uncaught errors originating from your test code it will automatically fail the current test. Cypress could not associate this error to any specific test. We dynamically generated a new test to display this failure. Check your console for the stack trace or click this message to see where it originated from. src/integration/nameOfSpec%20sync:2:1 1 | function webpackEmptyContext(req) { > 2 | var e = new Error("Cannot find module '" + req + "'"); | ^ 3 | e.code = 'MODULE_NOT_FOUND'; 4 | throw e; 5 | } ``` ### Workaround Change tsconfig.json from `"module": "umd"` to `"module": "commonjs"`. Our project had this set to `umd` for the past 2 years for whatever reason, made sense back at the time to have it that way. Thankfully, setting it to `commonjs` does not appear to break anything with our tests so the workaround is acceptable for us. I did not find anything in the Cypress docs when googling about "cypress tsconfig module umd" so it seems like this is undocumented. ### Desired behavior: It would work without throwing an error about "Cannot find module". Or: It should be documented/warned that module type must NOT be `umd` (required to be `commonjs`? I have not tried other options other than `commonjs` and `umd`) - possibly could be documented [here](https://docs.cypress.io/guides/tooling/typescript-support.html#Configure-tsconfig-json) ### Test code to reproduce In project's `tsconfig.json` set `"module": "umd"` and you will receive an error during test execution. 
Repro steps: * clone https://github.com/cypress-io/cypress-and-jest-typescript-example * Bump cypress version to 5.x * Set tsconfig.json to have `"module": "umd"` * Run `npm run cy:open` and run the spec * error is shown: ``` An uncaught error was detected outside of a testfailed No commands were issued in this test. Error The following error originated from your test code, not from Cypress. > Cannot find module '../../src/foo' When Cypress detects uncaught errors originating from your test code it will automatically fail the current test. Cypress could not associate this error to any specific test. We dynamically generated a new test to display this failure. Check your console for the stack trace or click this message to see where it originated from. cypress/integration%20sync:2:1 1 | function webpackEmptyContext(req) { > 2 | var e = new Error("Cannot find module '" + req + "'"); | ^ 3 | e.code = 'MODULE_NOT_FOUND'; 4 | throw e; 5 | } ``` The workaround is just not to use the `umd` module type, but it would be good if cypress could either warn against this during runtime or at least here: https://docs.cypress.io/guides/tooling/typescript-support.html#Configure-tsconfig-json And maybe someone has a legit use case for using `umd` * but I don't know what that would be. ### Versions Cypress v5.0.0, 5.1.0, 5.2.0
process
cypress x does not support umd module type in tsconfig json current behavior when i upgrade from to x i get an error like this the following error originated from your test code not from cypress cannot find module path to something that used to work when cypress detects uncaught errors originating from your test code it will automatically fail the current test cypress could not associate this error to any specific test we dynamically generated a new test to display this failure check your console for the stack trace or click this message to see where it originated from src integration nameofspec function webpackemptycontext req var e new error cannot find module req e code module not found throw e workaround change tsconfig json from module umd to module commonjs our project had this set to umd for the past years for whatever reason made sense back at the time to have it that way thankfully setting it to commonjs does not appear to break anything with our tests so the workaround is acceptable for us i did not find anything in the cypress docs when googling about cypress tsconfig module umd so it seems like this is undocumented desired behavior it would work without throwing an error about cannot find module or it should be documented warned that module type must not be umd required to be commonjs i have not tried other options other than commonjs and umd possibly could be documented test code to reproduce in project s tsconfig json set module umd and you will receive an error during test execution repro steps clone bump cypress version to x set tsconfig json to have module umd run npm run cy open and run the spec error is shown an uncaught error was detected outside of a testfailed no commands were issued in this test error the following error originated from your test code not from cypress cannot find module src foo when cypress detects uncaught errors originating from your test code it will automatically fail the current test cypress could not associate this 
error to any specific test we dynamically generated a new test to display this failure check your console for the stack trace or click this message to see where it originated from cypress integration function webpackemptycontext req var e new error cannot find module req e code module not found throw e the workaround is just not to use the umd module type but it would be good if cypress could either warn against this during runtime or at least here and maybe someone has a legit use case for using umd but i don t know what that would be versions cypress
1
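The workaround stated in the Cypress record above (change `"module": "umd"` to `"module": "commonjs"` in `tsconfig.json`) can be sketched as a minimal config fragment. Only the `module` line comes from the issue itself; the surrounding options are a hypothetical minimal setup in the spirit of the Cypress TypeScript docs, not taken from the reporter's project:

```json
{
  "compilerOptions": {
    "target": "es5",
    "lib": ["es5", "dom"],
    "types": ["cypress"],
    "module": "commonjs"
  },
  "include": ["**/*.ts"]
}
```

With `"module": "umd"`, webpack's dynamic-require shim (`webpackEmptyContext`) is what throws the `Cannot find module` error shown in the record; `commonjs` output avoids that code path.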
19,949
26,421,914,350
IssuesEvent
2023-01-13 21:28:37
rusefi/rusefi_documentation
https://api.github.com/repos/rusefi/rusefi_documentation
closed
embedded HTML code creates issues for MkDocs and results in wrong display on wiki.rusefi.com
wiki location & process change
## example: ![2022-12-27 17_51_39-List of FAQ and HOWTO pages - rusEFI Wiki - Vivaldi](https://user-images.githubusercontent.com/22799428/209699733-f08a0200-f095-42f5-8720-3faa7ec48b3a.png) ## root cause: ![2022-12-27 17_59_14-Pages-FAQ-and-HOWTO md - rusefi_documentation - Visual Studio Code](https://user-images.githubusercontent.com/22799428/209699767-ac76cc54-2e20-4210-bd94-e1576ee9f047.png) ## fix: **use only markdown tags and remove embedded HTML code** ![image](https://user-images.githubusercontent.com/22799428/209699883-6e89c55b-567c-42d5-958b-14edf106363a.png) ![image](https://user-images.githubusercontent.com/22799428/209700292-b09d4524-d325-4412-8654-02c1d6f19ae0.png) ## affected markdown files 117 results - 8 files ``` FAQ-Ignition.md: 42 43: <details><summary><u>I want to buy new coils</u></summary> 44 Pages-Fuel.md: 2 3: <details><summary><u>rusEFI Project</u></summary> 4 Pages-HOWTO.md: 2 3: <details><summary><u>HOW TO</u></summary> 4 ... Pages-Ignition.md: 4 5: <details><summary><u>rusEFI Project</u></summary> 6 ... Pages-Sensors-and-Actuators.md: 2 3: <details><summary><u>Throttle and ETB</u></summary> 4 ... Pages-Software.md: 2 3: <details><summary><u>rusEFI Project</u></summary> 4 ... Vault-Of-Terminology.md: 4 5: <details><summary><u>AAP</u></summary> 6 ... ```
1.0
embedded HTML code creates issues for MkDocs and results in wrong display on wiki.rusefi.com - ## example: ![2022-12-27 17_51_39-List of FAQ and HOWTO pages - rusEFI Wiki - Vivaldi](https://user-images.githubusercontent.com/22799428/209699733-f08a0200-f095-42f5-8720-3faa7ec48b3a.png) ## root cause: ![2022-12-27 17_59_14-Pages-FAQ-and-HOWTO md - rusefi_documentation - Visual Studio Code](https://user-images.githubusercontent.com/22799428/209699767-ac76cc54-2e20-4210-bd94-e1576ee9f047.png) ## fix: **use only markdown tags and remove embedded HTML code** ![image](https://user-images.githubusercontent.com/22799428/209699883-6e89c55b-567c-42d5-958b-14edf106363a.png) ![image](https://user-images.githubusercontent.com/22799428/209700292-b09d4524-d325-4412-8654-02c1d6f19ae0.png) ## affected markdown files 117 results - 8 files ``` FAQ-Ignition.md: 42 43: <details><summary><u>I want to buy new coils</u></summary> 44 Pages-Fuel.md: 2 3: <details><summary><u>rusEFI Project</u></summary> 4 Pages-HOWTO.md: 2 3: <details><summary><u>HOW TO</u></summary> 4 ... Pages-Ignition.md: 4 5: <details><summary><u>rusEFI Project</u></summary> 6 ... Pages-Sensors-and-Actuators.md: 2 3: <details><summary><u>Throttle and ETB</u></summary> 4 ... Pages-Software.md: 2 3: <details><summary><u>rusEFI Project</u></summary> 4 ... Vault-Of-Terminology.md: 4 5: <details><summary><u>AAP</u></summary> 6 ... ```
process
embedded html code creates issues for mkdocs and results in wrong display on wiki rusefi com example root cause fix use only markdown tags and remove embedded html code affected markdown files results files faq ignition md i want to buy new coils pages fuel md rusefi project pages howto md how to pages ignition md rusefi project pages sensors and actuators md throttle and etb pages software md rusefi project vault of terminology md aap
1
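The fix named in the rusEFI record above ("use only markdown tags and remove embedded HTML code") can be sketched as a before/after fragment. The collapsible replacement below assumes the MkDocs `pymdownx.details` extension is enabled on the wiki, which the issue does not state; if it is not, a plain heading plus list is the safe markdown-only form:

```markdown
<!-- before: raw HTML that MkDocs mis-renders -->
<details><summary><u>rusEFI Project</u></summary>
- [FAQ](FAQ)
</details>

<!-- after (assumes pymdownx.details): a collapsible admonition -->
??? note "rusEFI Project"
    - [FAQ](FAQ)

<!-- after (no extensions): plain markdown heading -->
## rusEFI Project
- [FAQ](FAQ)
```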
3,199
6,262,258,847
IssuesEvent
2017-07-15 08:38:51
nodejs/node
https://api.github.com/repos/nodejs/node
closed
`exec` and `execSync` `git clone private_repo` + timeout hangs the REPL
child_process doc good first contribution repl
- **Version**: v6.7 (also v5.12) - **Platform**: Linux #138-Ubuntu SMP Fri Jun 24 17:00:34 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux **Expected behavior**: REPL resumes working **Actual behavior**: Process hangs Description: When git cloning a repo that requires entering a username and password in `child_process.exec` or `child_process.execSync` and hitting a `timeout` then the program and shell will hang. I've tried this on multiple machines. Here is repro code that uses a private repo of mine (notice how it asks for the username and then the program and shell hangs): ``` js > child_process.execSync('git clone https://github.com/amasad/repl-it-web', { timeout: 1000, stdio: ['ignore']}) Username for 'https://github.com': Error: spawnSync /bin/sh ETIMEDOUT at exports._errnoException (util.js:893:11) at spawnSync (child_process.js:448:20) at Object.execSync (child_process.js:504:13) at repl:1:15 at REPLServer.defaultEval (repl.js:270:27) at bound (domain.js:287:14) at REPLServer.runBound [as eval] (domain.js:300:12) at REPLServer.<anonymous> (repl.js:439:10) at emitOne (events.js:95:20) at REPLServer.emit (events.js:182:7) ``` Note that if I `pkill node` from another shell then the shell starts working again.
1.0
`exec` and `execSync` `git clone private_repo` + timeout hangs the REPL - - **Version**: v6.7 (also v5.12) - **Platform**: Linux #138-Ubuntu SMP Fri Jun 24 17:00:34 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux **Expected behavior**: REPL resumes working **Actual behavior**: Process hangs Description: When git cloning a repo that requires entering a username and password in `child_process.exec` or `child_process.execSync` and hitting a `timeout` then the program and shell will hang. I've tried this on multiple machines. Here is repro code that uses a private repo of mine (notice how it asks for the username and then the program and shell hangs): ``` js > child_process.execSync('git clone https://github.com/amasad/repl-it-web', { timeout: 1000, stdio: ['ignore']}) Username for 'https://github.com': Error: spawnSync /bin/sh ETIMEDOUT at exports._errnoException (util.js:893:11) at spawnSync (child_process.js:448:20) at Object.execSync (child_process.js:504:13) at repl:1:15 at REPLServer.defaultEval (repl.js:270:27) at bound (domain.js:287:14) at REPLServer.runBound [as eval] (domain.js:300:12) at REPLServer.<anonymous> (repl.js:439:10) at emitOne (events.js:95:20) at REPLServer.emit (events.js:182:7) ``` Note that if I `pkill node` from another shell then the shell starts working again.
process
exec and execsync git clone private repo timeout hangs the repl version also platform linux ubuntu smp fri jun utc gnu linux expected behavior repl resumes working actual behavior process hangs description when git cloning a repo that requires entering a username and password in child process exec or child process execsync and hitting a timeout then the program and shell will hang i ve tried this on multiple machines here is repro code that uses a private repo of mine notice how it asks for the username and then the program and shell hangs js child process execsync git clone timeout stdio username for error spawnsync bin sh etimedout at exports errnoexception util js at spawnsync child process js at object execsync child process js at repl at replserver defaulteval repl js at bound domain js at replserver runbound domain js at replserver repl js at emitone events js at replserver emit events js note that if i pkill node from another shell then the shell starts working again
1
21,485
29,577,946,972
IssuesEvent
2023-06-07 01:37:03
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Strip version does not works properly even with simple Hello world program on java.
P4 type: support / not a bug (process) team-ExternalDeps stale
### Description of the problem / feature request: Require heavy bundle of bazel which does have required tooling. Strip version of bazel is failing in most of the cases. java.io.IOException: Error downloading [https://mirror.bazel.build/bazel_java_tools/releases/javac11/v4.0/java_tools_javac11_linux-v4.0.zip] ### Feature requests: what underlying problem are you trying to solve with this feature? any java target cause above issue ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. ``` harora37@harora37:~/dummy$ cat BUILD java_binary( name = "Hello", main_class = "com.ibm.amg.services.registration.Hello", srcs = ["Hello.java"], ) harora37@harora37:~/dummy$ cat Hello.java package com.ibm.amg.services.registration; public class Hello { public static void main(String...str) { System.out.println("Hello Word"); } } ``` WARNING: Download from https://mirror.bazel.build/bazel_java_tools/releases/javac11/v4.0/java_tools_javac11_linux-v4.0.zip failed: class javax.net.ssl.SSLProtocolException Connection reset ERROR: An error occurred during the fetch of repository 'remote_java_tools_linux': java.io.IOException: Error downloading [https://mirror.bazel.build/bazel_java_tools/releases/javac11/v4.0/java_tools_javac11_linux-v4.0.zip] to /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/external/remote_java_tools_linux/java_tools_javac11_linux-v4.0.zip: Connection reset ERROR: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/external/bazel_tools/tools/jdk/BUILD:224:1: @bazel_tools//tools/jdk:JacocoCoverageRunner depends on @remote_java_tools_linux//:java_tools/JacocoCoverage_jarjar_deploy.jar in repository @remote_java_tools_linux which failed to fetch. 
no such package '@remote_java_tools_linux//': java.io.IOException: Error downloading [https://mirror.bazel.build/bazel_java_tools/releases/javac11/v4.0/java_tools_javac11_linux-v4.0.zip] to /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/external/remote_java_tools_linux/java_tools_javac11_linux-v4.0.zip: Connection reset ERROR: Analysis of target '//:Hello' failed; build aborted: no such package '@remote_java_tools_linux//': java.io.IOException: Error downloading [https://mirror.bazel.build/bazel_java_tools/releases/javac11/v4.0/java_tools_javac11_linux-v4.0.zip] to /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/external/remote_java_tools_linux/java_tools_javac11_linux-v4.0.zip: Connection reset INFO: Elapsed time: 14.269s INFO: 0 processes. FAILED: Build did NOT complete successfully (15 packages loaded, 282 targets configured) FAILED: Build did NOT complete successfully (15 packages loaded, 282 targets configured) harora37@harora37:~/dummy$ ls BUILD Hello.java WORKSPACE harora37@harora37:~/dummy$ bazel info release release 0.29.0 ### What operating system are you running Bazel on? Ubuntu 16.04 ### What's the output of `bazel info release`? 
``` $:~/dummy$ bazel info bazel-bin: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/execroot/__main__/bazel-out/k8-fastbuild/bin bazel-genfiles: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/execroot/__main__/bazel-out/k8-fastbuild/bin bazel-testlogs: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/execroot/__main__/bazel-out/k8-fastbuild/testlogs character-encoding: file.encoding = ISO-8859-1, defaultCharset = ISO-8859-1 command_log: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/command.log committed-heap-size: 98MB execution_root: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/execroot/__main__ gc-count: 7 gc-time: 178ms install_base: /home/harora37/.cache/bazel/_bazel_harora37/install/7390d4f46c916c7cd973ba0dce27ab33 java-home: /home/harora37/.cache/bazel/_bazel_harora37/install/7390d4f46c916c7cd973ba0dce27ab33/_embedded_binaries/embedded_tools/jdk java-runtime: OpenJDK Runtime Environment (build 11.0.2+7-LTS) by Azul Systems, Inc. java-vm: OpenJDK 64-Bit Server VM (build 11.0.2+7-LTS, mixed mode) by Azul Systems, Inc. max-heap-size: 8415MB output_base: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a output_path: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/execroot/__main__/bazel-out package_path: %workspace% release: release 0.29.0 repository_cache: /home/harora37/.cache/bazel/_bazel_harora37/cache/repos/v1 server_log: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/java.log.harora37.harora37.log.java.20190829-151108.18109 server_pid: 18109 used-heap-size: 44MB workspace: /home/harora37/dummy ``` ### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel. 
``` release: release 0.29.0 ``` ### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ? this is private repo of IBM ### Have you found anything relevant by searching the web? https://github.com/bazelbuild/rules_scala/issues/798#issuecomment-515017438 but not helping ### Any other information, logs, or outputs that you want to share? No FYI @johnynek @nlopezgi @cushon @katre @iirina
1.0
Strip version does not works properly even with simple Hello world program on java. - ### Description of the problem / feature request: Require heavy bundle of bazel which does have required tooling. Strip version of bazel is failing in most of the cases. java.io.IOException: Error downloading [https://mirror.bazel.build/bazel_java_tools/releases/javac11/v4.0/java_tools_javac11_linux-v4.0.zip] ### Feature requests: what underlying problem are you trying to solve with this feature? any java target cause above issue ### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. ``` harora37@harora37:~/dummy$ cat BUILD java_binary( name = "Hello", main_class = "com.ibm.amg.services.registration.Hello", srcs = ["Hello.java"], ) harora37@harora37:~/dummy$ cat Hello.java package com.ibm.amg.services.registration; public class Hello { public static void main(String...str) { System.out.println("Hello Word"); } } ``` WARNING: Download from https://mirror.bazel.build/bazel_java_tools/releases/javac11/v4.0/java_tools_javac11_linux-v4.0.zip failed: class javax.net.ssl.SSLProtocolException Connection reset ERROR: An error occurred during the fetch of repository 'remote_java_tools_linux': java.io.IOException: Error downloading [https://mirror.bazel.build/bazel_java_tools/releases/javac11/v4.0/java_tools_javac11_linux-v4.0.zip] to /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/external/remote_java_tools_linux/java_tools_javac11_linux-v4.0.zip: Connection reset ERROR: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/external/bazel_tools/tools/jdk/BUILD:224:1: @bazel_tools//tools/jdk:JacocoCoverageRunner depends on @remote_java_tools_linux//:java_tools/JacocoCoverage_jarjar_deploy.jar in repository @remote_java_tools_linux which failed to fetch. 
no such package '@remote_java_tools_linux//': java.io.IOException: Error downloading [https://mirror.bazel.build/bazel_java_tools/releases/javac11/v4.0/java_tools_javac11_linux-v4.0.zip] to /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/external/remote_java_tools_linux/java_tools_javac11_linux-v4.0.zip: Connection reset ERROR: Analysis of target '//:Hello' failed; build aborted: no such package '@remote_java_tools_linux//': java.io.IOException: Error downloading [https://mirror.bazel.build/bazel_java_tools/releases/javac11/v4.0/java_tools_javac11_linux-v4.0.zip] to /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/external/remote_java_tools_linux/java_tools_javac11_linux-v4.0.zip: Connection reset INFO: Elapsed time: 14.269s INFO: 0 processes. FAILED: Build did NOT complete successfully (15 packages loaded, 282 targets configured) FAILED: Build did NOT complete successfully (15 packages loaded, 282 targets configured) harora37@harora37:~/dummy$ ls BUILD Hello.java WORKSPACE harora37@harora37:~/dummy$ bazel info release release 0.29.0 ### What operating system are you running Bazel on? Ubuntu 16.04 ### What's the output of `bazel info release`? 
``` $:~/dummy$ bazel info bazel-bin: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/execroot/__main__/bazel-out/k8-fastbuild/bin bazel-genfiles: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/execroot/__main__/bazel-out/k8-fastbuild/bin bazel-testlogs: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/execroot/__main__/bazel-out/k8-fastbuild/testlogs character-encoding: file.encoding = ISO-8859-1, defaultCharset = ISO-8859-1 command_log: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/command.log committed-heap-size: 98MB execution_root: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/execroot/__main__ gc-count: 7 gc-time: 178ms install_base: /home/harora37/.cache/bazel/_bazel_harora37/install/7390d4f46c916c7cd973ba0dce27ab33 java-home: /home/harora37/.cache/bazel/_bazel_harora37/install/7390d4f46c916c7cd973ba0dce27ab33/_embedded_binaries/embedded_tools/jdk java-runtime: OpenJDK Runtime Environment (build 11.0.2+7-LTS) by Azul Systems, Inc. java-vm: OpenJDK 64-Bit Server VM (build 11.0.2+7-LTS, mixed mode) by Azul Systems, Inc. max-heap-size: 8415MB output_base: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a output_path: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/execroot/__main__/bazel-out package_path: %workspace% release: release 0.29.0 repository_cache: /home/harora37/.cache/bazel/_bazel_harora37/cache/repos/v1 server_log: /home/harora37/.cache/bazel/_bazel_harora37/bbea7929ac98478df803126c2ffe502a/java.log.harora37.harora37.log.java.20190829-151108.18109 server_pid: 18109 used-heap-size: 44MB workspace: /home/harora37/dummy ``` ### If `bazel info release` returns "development version" or "(@non-git)", tell us how you built Bazel. 
``` release: release 0.29.0 ``` ### What's the output of `git remote get-url origin ; git rev-parse master ; git rev-parse HEAD` ? this is private repo of IBM ### Have you found anything relevant by searching the web? https://github.com/bazelbuild/rules_scala/issues/798#issuecomment-515017438 but not helping ### Any other information, logs, or outputs that you want to share? No FYI @johnynek @nlopezgi @cushon @katre @iirina
process
strip version does not works properly even with simple hello world program on java description of the problem feature request require heavy bundle of bazel which does have required tooling strip version of bazel is failing in most of the cases java io ioexception error downloading feature requests what underlying problem are you trying to solve with this feature any java target cause above issue bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible dummy cat build java binary name hello main class com ibm amg services registration hello srcs dummy cat hello java package com ibm amg services registration public class hello public static void main string str system out println hello word warning download from failed class javax net ssl sslprotocolexception connection reset error an error occurred during the fetch of repository remote java tools linux java io ioexception error downloading to home cache bazel bazel external remote java tools linux java tools linux zip connection reset error home cache bazel bazel external bazel tools tools jdk build bazel tools tools jdk jacococoveragerunner depends on remote java tools linux java tools jacococoverage jarjar deploy jar in repository remote java tools linux which failed to fetch no such package remote java tools linux java io ioexception error downloading to home cache bazel bazel external remote java tools linux java tools linux zip connection reset error analysis of target hello failed build aborted no such package remote java tools linux java io ioexception error downloading to home cache bazel bazel external remote java tools linux java tools linux zip connection reset info elapsed time info processes failed build did not complete successfully packages loaded targets configured failed build did not complete successfully packages loaded targets configured dummy ls build hello java workspace dummy bazel info release release what operating system are you running bazel 
on ubuntu what s the output of bazel info release dummy bazel info bazel bin home cache bazel bazel execroot main bazel out fastbuild bin bazel genfiles home cache bazel bazel execroot main bazel out fastbuild bin bazel testlogs home cache bazel bazel execroot main bazel out fastbuild testlogs character encoding file encoding iso defaultcharset iso command log home cache bazel bazel command log committed heap size execution root home cache bazel bazel execroot main gc count gc time install base home cache bazel bazel install java home home cache bazel bazel install embedded binaries embedded tools jdk java runtime openjdk runtime environment build lts by azul systems inc java vm openjdk bit server vm build lts mixed mode by azul systems inc max heap size output base home cache bazel bazel output path home cache bazel bazel execroot main bazel out package path workspace release release repository cache home cache bazel bazel cache repos server log home cache bazel bazel java log log java server pid used heap size workspace home dummy if bazel info release returns development version or non git tell us how you built bazel release release what s the output of git remote get url origin git rev parse master git rev parse head this is private repo of ibm have you found anything relevant by searching the web but not helping any other information logs or outputs that you want to share no fyi johnynek nlopezgi cushon katre iirina
1
84,797
15,728,293,847
IssuesEvent
2021-03-29 13:40:01
ssobue/kotlin-boot
https://api.github.com/repos/ssobue/kotlin-boot
closed
CVE-2018-19361 (High) detected in jackson-databind-2.8.10.jar
security vulnerability
## CVE-2018-19361 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.10.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /kotlin-boot/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.10/jackson-databind-2.8.10.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-1.5.9.RELEASE.jar (Root Library) - :x: **jackson-databind-2.8.10.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.8 might allow attackers to have unspecified impact by leveraging failure to block the openjpa class from polymorphic deserialization. <p>Publish Date: 2019-01-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19361>CVE-2018-19361</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19361">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19361</a></p> <p>Release Date: 2019-01-02</p> <p>Fix Resolution: 2.9.8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-19361 (High) detected in jackson-databind-2.8.10.jar - ## CVE-2018-19361 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.10.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /kotlin-boot/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.10/jackson-databind-2.8.10.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-1.5.9.RELEASE.jar (Root Library) - :x: **jackson-databind-2.8.10.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.8 might allow attackers to have unspecified impact by leveraging failure to block the openjpa class from polymorphic deserialization. <p>Publish Date: 2019-01-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19361>CVE-2018-19361</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19361">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19361</a></p> <p>Release Date: 2019-01-02</p> <p>Fix Resolution: 2.9.8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file kotlin boot pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before might allow attackers to have unspecified impact by leveraging failure to block the openjpa class from polymorphic deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
187,597
15,101,421,980
IssuesEvent
2021-02-08 07:28:00
Kostyachzhan/ShedulerBook
https://api.github.com/repos/Kostyachzhan/ShedulerBook
closed
UseCase diagram
documentation
Develop a database schema with tables in the form of a corresponding ER diagram. For each table, specify the data that will be stored there, the data types, as well as the dependencies between the tables.
1.0
UseCase diagram - Develop a database schema with tables in the form of a corresponding ER diagram. For each table, specify the data that will be stored there, the data types, as well as the dependencies between the tables.
non_process
usecase diagram develop a database schema with tables in the form of a corresponding er diagram for each table specify the data that will be stored there the data types as well as the dependencies between the tables
0
17,546
23,357,169,869
IssuesEvent
2022-08-10 08:29:09
googleapis/java-spanner
https://api.github.com/repos/googleapis/java-spanner
opened
Encrypted test instance is not always deleted after test run
type: cleanup type: process api: spanner
[This line](https://github.com/googleapis/java-spanner/blob/787ccadcba01193d541bfd1b80b055fb5d4c2bb3/samples/snippets/src/test/java/com/example/spanner/SpannerSampleIT.java#L458) will not be executed if the test is killed, for example if the test times out. That will leave the encrypted test instance lingering around. We should therefore add a cleanup task at the start of the test run to remove any old instances still hanging around.
1.0
Encrypted test instance is not always deleted after test run - [This line](https://github.com/googleapis/java-spanner/blob/787ccadcba01193d541bfd1b80b055fb5d4c2bb3/samples/snippets/src/test/java/com/example/spanner/SpannerSampleIT.java#L458) will not be executed if the test is killed, for example if the test times out. That will leave the encrypted test instance lingering around. We should therefore add a cleanup task at the start of the test run to remove any old instances still hanging around.
process
encrypted test instance is not always deleted after test run will not be executed if the test is killed for example if the test times out that will leave the encrypted test instance lingering around we should therefore add a cleanup task at the start of the test run to remove any old instances still hanging around
1
271,868
23,637,626,351
IssuesEvent
2022-08-25 14:27:07
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
ccl/changefeedccl: TestChangefeedPrimaryKeyChangeWorksWithMultipleTables failed
C-test-failure O-robot branch-master T-cdc
ccl/changefeedccl.TestChangefeedPrimaryKeyChangeWorksWithMultipleTables [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/6213858?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/6213858?buildTab=artifacts#/) on master @ [003c0360de8b64319b5f0f127b99be91dbdca8a3](https://github.com/cockroachdb/cockroach/commits/003c0360de8b64319b5f0f127b99be91dbdca8a3): ``` === RUN TestChangefeedPrimaryKeyChangeWorksWithMultipleTables test_log_scope.go:162: test logs captured to: /artifacts/tmp/_tmp/a77002d7c9453d7cd2d382f907780e13/logTestChangefeedPrimaryKeyChangeWorksWithMultipleTables3076719986 test_log_scope.go:80: use -show-logs to present logs inline === CONT TestChangefeedPrimaryKeyChangeWorksWithMultipleTables changefeed_test.go:5421: -- test log scope end -- --- FAIL: TestChangefeedPrimaryKeyChangeWorksWithMultipleTables (5.54s) === RUN TestChangefeedPrimaryKeyChangeWorksWithMultipleTables/sinkless helpers_test.go:716: making server as system tenant helpers_test.go:803: making sinkless feed factory changefeed_test.go:5414: Error Trace: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/4065/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/ccl/changefeedccl/changefeedccl_test_/changefeedccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/changefeedccl/helpers_test.go:190 /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/4065/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/ccl/changefeedccl/changefeedccl_test_/changefeedccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/changefeedccl/helpers_test.go:258 
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/4065/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/ccl/changefeedccl/changefeedccl_test_/changefeedccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/changefeedccl/changefeed_test.go:5414 /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/4065/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/ccl/changefeedccl/changefeedccl_test_/changefeedccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/changefeedccl/helpers_test.go:839 /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/4065/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/ccl/changefeedccl/changefeedccl_test_/changefeedccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/changefeedccl/helpers_test.go:867 Error: Received unexpected error: ERROR: context canceled (SQLSTATE XXUUU) Test: TestChangefeedPrimaryKeyChangeWorksWithMultipleTables/sinkless --- FAIL: TestChangefeedPrimaryKeyChangeWorksWithMultipleTables/sinkless (5.39s) ``` <p>Parameters: <code>TAGS=bazel,gss,deadlock</code> </p> <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> /cc @cockroachdb/cdc <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestChangefeedPrimaryKeyChangeWorksWithMultipleTables.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-18899 Epic CRDB-11732
1.0
ccl/changefeedccl: TestChangefeedPrimaryKeyChangeWorksWithMultipleTables failed - ccl/changefeedccl.TestChangefeedPrimaryKeyChangeWorksWithMultipleTables [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/6213858?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/6213858?buildTab=artifacts#/) on master @ [003c0360de8b64319b5f0f127b99be91dbdca8a3](https://github.com/cockroachdb/cockroach/commits/003c0360de8b64319b5f0f127b99be91dbdca8a3): ``` === RUN TestChangefeedPrimaryKeyChangeWorksWithMultipleTables test_log_scope.go:162: test logs captured to: /artifacts/tmp/_tmp/a77002d7c9453d7cd2d382f907780e13/logTestChangefeedPrimaryKeyChangeWorksWithMultipleTables3076719986 test_log_scope.go:80: use -show-logs to present logs inline === CONT TestChangefeedPrimaryKeyChangeWorksWithMultipleTables changefeed_test.go:5421: -- test log scope end -- --- FAIL: TestChangefeedPrimaryKeyChangeWorksWithMultipleTables (5.54s) === RUN TestChangefeedPrimaryKeyChangeWorksWithMultipleTables/sinkless helpers_test.go:716: making server as system tenant helpers_test.go:803: making sinkless feed factory changefeed_test.go:5414: Error Trace: /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/4065/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/ccl/changefeedccl/changefeedccl_test_/changefeedccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/changefeedccl/helpers_test.go:190 /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/4065/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/ccl/changefeedccl/changefeedccl_test_/changefeedccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/changefeedccl/helpers_test.go:258 
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/4065/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/ccl/changefeedccl/changefeedccl_test_/changefeedccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/changefeedccl/changefeed_test.go:5414 /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/4065/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/ccl/changefeedccl/changefeedccl_test_/changefeedccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/changefeedccl/helpers_test.go:839 /home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/4065/execroot/com_github_cockroachdb_cockroach/bazel-out/k8-dbg/bin/pkg/ccl/changefeedccl/changefeedccl_test_/changefeedccl_test.runfiles/com_github_cockroachdb_cockroach/pkg/ccl/changefeedccl/helpers_test.go:867 Error: Received unexpected error: ERROR: context canceled (SQLSTATE XXUUU) Test: TestChangefeedPrimaryKeyChangeWorksWithMultipleTables/sinkless --- FAIL: TestChangefeedPrimaryKeyChangeWorksWithMultipleTables/sinkless (5.39s) ``` <p>Parameters: <code>TAGS=bazel,gss,deadlock</code> </p> <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> /cc @cockroachdb/cdc <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestChangefeedPrimaryKeyChangeWorksWithMultipleTables.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-18899 Epic CRDB-11732
non_process
ccl changefeedccl testchangefeedprimarykeychangeworkswithmultipletables failed ccl changefeedccl testchangefeedprimarykeychangeworkswithmultipletables with on master run testchangefeedprimarykeychangeworkswithmultipletables test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline cont testchangefeedprimarykeychangeworkswithmultipletables changefeed test go test log scope end fail testchangefeedprimarykeychangeworkswithmultipletables run testchangefeedprimarykeychangeworkswithmultipletables sinkless helpers test go making server as system tenant helpers test go making sinkless feed factory changefeed test go error trace home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out dbg bin pkg ccl changefeedccl changefeedccl test changefeedccl test runfiles com github cockroachdb cockroach pkg ccl changefeedccl helpers test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out dbg bin pkg ccl changefeedccl changefeedccl test changefeedccl test runfiles com github cockroachdb cockroach pkg ccl changefeedccl helpers test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out dbg bin pkg ccl changefeedccl changefeedccl test changefeedccl test runfiles com github cockroachdb cockroach pkg ccl changefeedccl changefeed test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out dbg bin pkg ccl changefeedccl changefeedccl test changefeedccl test runfiles com github cockroachdb cockroach pkg ccl changefeedccl helpers test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot com github cockroachdb cockroach bazel out dbg bin pkg ccl changefeedccl changefeedccl test changefeedccl test runfiles com github cockroachdb cockroach pkg ccl changefeedccl 
helpers test go error received unexpected error error context canceled sqlstate xxuuu test testchangefeedprimarykeychangeworkswithmultipletables sinkless fail testchangefeedprimarykeychangeworkswithmultipletables sinkless parameters tags bazel gss deadlock help see also cc cockroachdb cdc jira issue crdb epic crdb
0
19,123
25,171,958,326
IssuesEvent
2022-11-11 04:44:36
emily-writes-poems/emily-writes-poems-processing
https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing
closed
separate tabs for each section
processing refinement
poem, details, collection, feature may possibly involve using React
1.0
separate tabs for each section - poem, details, collection, feature may possibly involve using React
process
separate tabs for each section poem details collection feature may possibly involve using react
1
24,559
5,081,341,177
IssuesEvent
2016-12-29 09:50:17
QualiSystems/Azure-Shell
https://api.github.com/repos/QualiSystems/Azure-Shell
closed
Cleanup attribute names and descriptions (data model finalizing)
Documentation Documentation-Done Test Plan Ready
### Redundant attributes that should be removed on Deployment Service: - Outbound Ports ### Redundant attributes that should be removed on Cloud Provider resource: - Azure Mgmt Network ID - Keypairs Location - Storage Type - AZURE MGMT VNET ### Cloud Provider resource attribute type changes: - Azure Secret - change attribute type to Password ### Final attribute names and descriptions: http://confluence.quali.com/pages/viewpage.action?pageId=18514287 (in progress)
2.0
Cleanup attribute names and descriptions (data model finalizing) - ### Redundant attributes that should be removed on Deployment Service: - Outbound Ports ### Redundant attributes that should be removed on Cloud Provider resource: - Azure Mgmt Network ID - Keypairs Location - Storage Type - AZURE MGMT VNET ### Cloud Provider resource attribute type changes: - Azure Secret - change attribute type to Password ### Final attribute names and descriptions: http://confluence.quali.com/pages/viewpage.action?pageId=18514287 (in progress)
non_process
cleanup attribute names and descriptions data model finalizing redundant attributes that should be removed on deployment service outbound ports redundant attributes that should be removed on cloud provider resource azure mgmt network id keypairs location storage type azure mgmt vnet cloud provider resource attribute type changes azure secret change attribute type to password final attribute names and descriptions in progress
0
280,497
8,682,664,117
IssuesEvent
2018-12-02 10:54:48
bounswe/bounswe2018group3
https://api.github.com/repos/bounswe/bounswe2018group3
opened
Redirect E-mail Confirmation page to the Frontend from the Backend
Backend difficulty : medium priority : low type : enhancement
When e-mail confirmation link is used, the user arrives at the backend endpoint of this function. This should be changed.
1.0
Redirect E-mail Confirmation page to the Frontend from the Backend - When e-mail confirmation link is used, the user arrives at the backend endpoint of this function. This should be changed.
non_process
redirect e mail confirmation page to the frontend from the backend when e mail confirmation link is used the user arrives at the backend endpoint of this function this should be changed
0
93,083
19,074,621,308
IssuesEvent
2021-11-27 14:46:57
F-star/fstar-blog-comment
https://api.github.com/repos/F-star/fstar-blog-comment
opened
【算法题】最大连续子序和 | fstar
Gitalk /posts/leetcode-maximum-subarray-solve/
https://blog.fstars.wang/posts/leetcode-maximum-subarray-solve/ An analysis of a LeetCode dynamic programming problem. Problem description Problem source: LeetCode — 53. Maximum Subarray. Given an integer array nums, find the contiguous subarray (containing at least one element) that has the largest sum and return its sum. Example: Input: [-2,1,-3,4,-1,2,1,-5,4], Output: 6 Explanation: …
1.0
[Algorithm Problem] Maximum Contiguous Subarray Sum | fstar - https://blog.fstars.wang/posts/leetcode-maximum-subarray-solve/ An analysis of a LeetCode dynamic programming problem. Problem description Problem source: LeetCode — 53. Maximum Subarray. Given an integer array nums, find the contiguous subarray (containing at least one element) that has the largest sum and return its sum. Example: Input: [-2,1,-3,4,-1,2,1,-5,4], Output: 6 Explanation: …
non_process
algorithm problem maximum contiguous subarray sum fstar an analysis of a leetcode dynamic programming problem problem description problem source leetcode maximum subarray given an integer array nums find the contiguous subarray containing at least one element that has the largest sum and return its sum example input output explanation …
0
3,264
6,343,003,580
IssuesEvent
2017-07-27 16:38:03
w3c/vc-data-model
https://api.github.com/repos/w3c/vc-data-model
closed
List all validity checks that should be performed
privacy ValidationProcess
We should make it clear that an inspector (e.g. corporation) is required to check revocation via the Issuer (e.g. government). In fact, there are many different types of validity checks that should be performed on data, such as the revocation status of an Issuer's signing key, the expiration date of the claim, etc.
1.0
List all validity checks that should be performed - We should make it clear that an inspector (e.g. corporation) is required to check revocation via the Issuer (e.g. government). In fact, there are many different types of validity checks that should be performed on data, such as the revocation status of an Issuer's signing key, the expiration date of the claim, etc.
process
list all validity checks that should be performed we should make it clear that an inspector e g corporation is required to check revocation via the issuer e g government in fact there are many different types of validity checks that should be performed on data such as the revocation status of an issuer s signing key the expiration date of the claim etc
1
18,690
24,595,097,070
IssuesEvent
2022-10-14 07:37:25
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[DID] Console > Duplicate patient records are getting created when same participant provides the responses for the multiple activities
Bug P1 Response datastore Process: Fixed Process: Tested QA Process: Tested dev
AR: Console > Duplicate patient records are getting created when the same participant provides the responses for the multiple activities ER: Single patient record should be there for each participant ![image](https://user-images.githubusercontent.com/71445210/155280477-1f5e0d48-a051-4e9e-8b55-3cd23c976638.png)
3.0
[DID] Console > Duplicate patient records are getting created when same participant provides the responses for the multiple activities - AR: Console > Duplicate patient records are getting created when the same participant provides the responses for the multiple activities ER: Single patient record should be there for each participant ![image](https://user-images.githubusercontent.com/71445210/155280477-1f5e0d48-a051-4e9e-8b55-3cd23c976638.png)
process
console duplicate patient records are getting created when same participant provides the responses for the multiple activities ar console duplicate patient records are getting created when the same participant provides the responses for the multiple activities er single patient record should be there for each participant
1
46,632
11,863,239,267
IssuesEvent
2020-03-25 19:20:52
angular/angular-cli
https://api.github.com/repos/angular/angular-cli
closed
ng build - ES5 bundles -> Unknown helper createSuper
comp: devkit/build-angular freq1: low severity3: broken type: bug/fix
<!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 Oh hi there! 😄 To expedite issue processing please search open and closed issues before submitting a new one. Existing issues often contain information about workarounds, resolution, or progress updates. 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅--> # 🐞 Bug report ### Command (mark with an `x`) <!-- Can you pin-point the command or commands that are effected by this bug? --> <!-- ✍️edit: --> - [ ] new - [x] build - [ ] serve - [ ] test - [ ] e2e - [ ] generate - [ ] add - [ ] update - [ ] lint - [ ] xi18n - [ ] run - [ ] config - [ ] help - [ ] version - [ ] doc ### Is this a regression? <!-- Did this behavior use to work in the previous version? --> <!-- ✍️--> Yes, the previous version in which this bug was not present was: 9.0.6 ### Description Any `ng build` ends up with `An unhandled exception occurred: C:\my\path\main-es2015.js: Unknown helper createSuper` ## 🔬 Minimal Reproduction <!-- Simple steps to reproduce this bug. Please include: commands run (including args), packages added, related code changes. If reproduction steps are not enough for reproduction of your issue, please create a minimal GitHub repository with the reproduction of the issue. A good way to make a minimal reproduction is to create a new app via `ng new repro-app` and add the minimum possible code to show the problem. Share the link to the repo below along with step-by-step instructions to reproduce the problem, as well as expected and actual behavior. Issues that don't have enough info and can't be reproduced will be closed. You can read more about issue submission guidelines here: https://github.com/angular/angular-cli/blob/master/CONTRIBUTING.md#-submitting-an-issue --> `ng build` or `ng build--prod` ## 🔥 Exception or Error <pre><code> Generating ES5 bundles for differential loading... An unhandled exception occurred: C:\my\path\main-es2015.js: Unknown helper createSuper See "C:\Users\Me\AppData\Local\Temp\ng-sYZMmM\angular-errors.log" for further details. 
</code></pre> and the angular-errors.log: <pre> [error] ReferenceError: C:\my\path\main-es2015.js: Unknown helper createSuper at loadHelper (C:\my\path\node_modules\@babel\helpers\lib\index.js:225:27) at Object.ensure (C:\my\path\node_modules\@babel\helpers\lib\index.js:270:3) at File.addHelper (C:\my\path\node_modules\@babel\core\lib\transformation\file\file.js:203:15) at pushInheritsToBody (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:471:384) at pushConstructorToBody (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:461:5) at pushConstructor (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:449:5) at pushBody (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:187:11) at buildBody (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:133:5) at classTransformer (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:537:5) at transformClass (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:573:10) at PluginPass.ClassExpression (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\index.js:63:54) at newFn (C:\my\path\node_modules\@babel\traverse\lib\visitors.js:179:21) at NodePath._call (C:\my\path\node_modules\@babel\traverse\lib\path\context.js:55:20) at NodePath.call (C:\my\path\node_modules\@babel\traverse\lib\path\context.js:42:17) at NodePath.visit (C:\my\path\node_modules\@babel\traverse\lib\path\context.js:90:31) at TraversalContext.visitQueue (C:\my\path\node_modules\@babel\traverse\lib\context.js:112:16) </pre> ## 🌍 Your Environment <pre><code> Angular CLI: 9.0.7 Node: 12.3.1 OS: win32 x64 Angular: 9.0.7 ... animations, cli, common, compiler, compiler-cli, core, forms ... language-service, localize, platform-browser ... 
platform-browser-dynamic, router Ivy Workspace: Yes Package Version ----------------------------------------------------------- @angular-devkit/architect 0.900.7 @angular-devkit/build-angular 0.900.7 @angular-devkit/build-optimizer 0.900.7 @angular-devkit/build-webpack 0.900.7 @angular-devkit/core 9.0.7 @angular-devkit/schematics 9.0.7 @angular/cdk 9.1.3 @angular/flex-layout 9.0.0-beta.29 @angular/material 9.1.3 @ngtools/webpack 9.0.7 @schematics/angular 9.0.7 @schematics/update 0.900.7 rxjs 6.5.4 typescript 3.7.5 webpack 4.41.2 </code></pre> **Anything else relevant?** Tried to install @babel/compat-data (mentioned [here](https://github.com/angular/angular-cli/issues/17262)) as it seems babel related , did not help EDIT: Downgrading to <pre> "@angular-devkit/build-angular": "^0.900.6", "@angular/cli": "^9.0.6", "@angular/compiler-cli": "^9.0.6", </pre> which returned the environment to: <pre> Angular CLI: 9.0.6 Node: 12.3.1 OS: win32 x64 Angular: 9.0.7 ... animations, common, compiler, core, forms, language-service ... localize, platform-browser, platform-browser-dynamic, router Ivy Workspace: Yes Package Version ----------------------------------------------------------- @angular-devkit/architect 0.900.6 @angular-devkit/build-angular 0.900.6 @angular-devkit/build-optimizer 0.900.6 @angular-devkit/build-webpack 0.900.6 @angular-devkit/core 9.0.6 @angular-devkit/schematics 9.0.6 @angular/cdk 9.1.3 @angular/cli 9.0.6 @angular/compiler-cli 9.0.6 @angular/flex-layout 9.0.0-beta.29 @angular/material 9.1.3 @ngtools/webpack 9.0.6 @schematics/angular 9.0.6 @schematics/update 0.900.6 rxjs 6.5.4 typescript 3.7.5 webpack 4.41.2 </pre> returned the functionality of `ng build` back.
1.0
ng build - ES5 bundles -> Unknown helper createSuper - <!--🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 Oh hi there! 😄 To expedite issue processing please search open and closed issues before submitting a new one. Existing issues often contain information about workarounds, resolution, or progress updates. 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅--> # 🐞 Bug report ### Command (mark with an `x`) <!-- Can you pin-point the command or commands that are affected by this bug? --> <!-- ✍️edit: --> - [ ] new - [x] build - [ ] serve - [ ] test - [ ] e2e - [ ] generate - [ ] add - [ ] update - [ ] lint - [ ] xi18n - [ ] run - [ ] config - [ ] help - [ ] version - [ ] doc ### Is this a regression? <!-- Did this behavior use to work in the previous version? --> <!-- ✍️--> Yes, the previous version in which this bug was not present was: 9.0.6 ### Description Any `ng build` ends up with `An unhandled exception occurred: C:\my\path\main-es2015.js: Unknown helper createSuper` ## 🔬 Minimal Reproduction <!-- Simple steps to reproduce this bug. Please include: commands run (including args), packages added, related code changes. If reproduction steps are not enough for reproduction of your issue, please create a minimal GitHub repository with the reproduction of the issue. A good way to make a minimal reproduction is to create a new app via `ng new repro-app` and add the minimum possible code to show the problem. Share the link to the repo below along with step-by-step instructions to reproduce the problem, as well as expected and actual behavior. Issues that don't have enough info and can't be reproduced will be closed. You can read more about issue submission guidelines here: https://github.com/angular/angular-cli/blob/master/CONTRIBUTING.md#-submitting-an-issue --> `ng build` or `ng build --prod` ## 🔥 Exception or Error <pre><code> Generating ES5 bundles for differential loading... 
An unhandled exception occurred: C:\my\path\main-es2015.js: Unknown helper createSuper See "C:\Users\Me\AppData\Local\Temp\ng-sYZMmM\angular-errors.log" for further details. </code></pre> and the angular-errors.log: <pre> [error] ReferenceError: C:\my\path\main-es2015.js: Unknown helper createSuper at loadHelper (C:\my\path\node_modules\@babel\helpers\lib\index.js:225:27) at Object.ensure (C:\my\path\node_modules\@babel\helpers\lib\index.js:270:3) at File.addHelper (C:\my\path\node_modules\@babel\core\lib\transformation\file\file.js:203:15) at pushInheritsToBody (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:471:384) at pushConstructorToBody (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:461:5) at pushConstructor (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:449:5) at pushBody (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:187:11) at buildBody (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:133:5) at classTransformer (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:537:5) at transformClass (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\transformClass.js:573:10) at PluginPass.ClassExpression (C:\my\path\node_modules\@babel\plugin-transform-classes\lib\index.js:63:54) at newFn (C:\my\path\node_modules\@babel\traverse\lib\visitors.js:179:21) at NodePath._call (C:\my\path\node_modules\@babel\traverse\lib\path\context.js:55:20) at NodePath.call (C:\my\path\node_modules\@babel\traverse\lib\path\context.js:42:17) at NodePath.visit (C:\my\path\node_modules\@babel\traverse\lib\path\context.js:90:31) at TraversalContext.visitQueue (C:\my\path\node_modules\@babel\traverse\lib\context.js:112:16) </pre> ## 🌍 Your Environment <pre><code> Angular CLI: 9.0.7 Node: 12.3.1 OS: win32 x64 Angular: 9.0.7 ... animations, cli, common, compiler, compiler-cli, core, forms ... 
language-service, localize, platform-browser ... platform-browser-dynamic, router Ivy Workspace: Yes Package Version ----------------------------------------------------------- @angular-devkit/architect 0.900.7 @angular-devkit/build-angular 0.900.7 @angular-devkit/build-optimizer 0.900.7 @angular-devkit/build-webpack 0.900.7 @angular-devkit/core 9.0.7 @angular-devkit/schematics 9.0.7 @angular/cdk 9.1.3 @angular/flex-layout 9.0.0-beta.29 @angular/material 9.1.3 @ngtools/webpack 9.0.7 @schematics/angular 9.0.7 @schematics/update 0.900.7 rxjs 6.5.4 typescript 3.7.5 webpack 4.41.2 </code></pre> **Anything else relevant?** Tried to install @babel/compat-data (mentioned [here](https://github.com/angular/angular-cli/issues/17262)) as it seems babel related , did not help EDIT: Downgrading to <pre> "@angular-devkit/build-angular": "^0.900.6", "@angular/cli": "^9.0.6", "@angular/compiler-cli": "^9.0.6", </pre> which returned the environment to: <pre> Angular CLI: 9.0.6 Node: 12.3.1 OS: win32 x64 Angular: 9.0.7 ... animations, common, compiler, core, forms, language-service ... localize, platform-browser, platform-browser-dynamic, router Ivy Workspace: Yes Package Version ----------------------------------------------------------- @angular-devkit/architect 0.900.6 @angular-devkit/build-angular 0.900.6 @angular-devkit/build-optimizer 0.900.6 @angular-devkit/build-webpack 0.900.6 @angular-devkit/core 9.0.6 @angular-devkit/schematics 9.0.6 @angular/cdk 9.1.3 @angular/cli 9.0.6 @angular/compiler-cli 9.0.6 @angular/flex-layout 9.0.0-beta.29 @angular/material 9.1.3 @ngtools/webpack 9.0.6 @schematics/angular 9.0.6 @schematics/update 0.900.6 rxjs 6.5.4 typescript 3.7.5 webpack 4.41.2 </pre> returned the functionality of `ng build` back.
non_process
ng build bundles unknown helper createsuper 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 oh hi there 😄 to expedite issue processing please search open and closed issues before submitting a new one existing issues often contain information about workarounds resolution or progress updates 🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅🔅 🐞 bug report command mark with an x new build serve test generate add update lint run config help version doc is this a regression yes the previous version in which this bug was not present was description any ng build ends up with an unhandled exception occurred c my path main js unknown helper createsuper 🔬 minimal reproduction simple steps to reproduce this bug please include commands run including args packages added related code changes if reproduction steps are not enough for reproduction of your issue please create a minimal github repository with the reproduction of the issue a good way to make a minimal reproduction is to create a new app via ng new repro app and add the minimum possible code to show the problem share the link to the repo below along with step by step instructions to reproduce the problem as well as expected and actual behavior issues that don t have enough info and can t be reproduced will be closed you can read more about issue submission guidelines here ng build or ng build prod 🔥 exception or error generating bundles for differential loading an unhandled exception occurred c my path main js unknown helper createsuper see c users me appdata local temp ng syzmmm angular errors log for further details and the angular errors log referenceerror c my path main js unknown helper createsuper at loadhelper c my path node modules babel helpers lib index js at object ensure c my path node modules babel helpers lib index js at file addhelper c my path node modules babel core lib transformation file file js at pushinheritstobody c my path node modules babel plugin transform classes lib transformclass js at pushconstructortobody c my path node 
modules babel plugin transform classes lib transformclass js at pushconstructor c my path node modules babel plugin transform classes lib transformclass js at pushbody c my path node modules babel plugin transform classes lib transformclass js at buildbody c my path node modules babel plugin transform classes lib transformclass js at classtransformer c my path node modules babel plugin transform classes lib transformclass js at transformclass c my path node modules babel plugin transform classes lib transformclass js at pluginpass classexpression c my path node modules babel plugin transform classes lib index js at newfn c my path node modules babel traverse lib visitors js at nodepath call c my path node modules babel traverse lib path context js at nodepath call c my path node modules babel traverse lib path context js at nodepath visit c my path node modules babel traverse lib path context js at traversalcontext visitqueue c my path node modules babel traverse lib context js 🌍 your environment angular cli node os angular animations cli common compiler compiler cli core forms language service localize platform browser platform browser dynamic router ivy workspace yes package version angular devkit architect angular devkit build angular angular devkit build optimizer angular devkit build webpack angular devkit core angular devkit schematics angular cdk angular flex layout beta angular material ngtools webpack schematics angular schematics update rxjs typescript webpack anything else relevant tried to install babel compat data mentioned as it seems babel related did not help edit downgrading to angular devkit build angular angular cli angular compiler cli which returned the environment to angular cli node os angular animations common compiler core forms language service localize platform browser platform browser dynamic router ivy workspace yes package version angular devkit architect angular devkit build angular angular devkit build optimizer angular devkit build 
webpack angular devkit core angular devkit schematics angular cdk angular cli angular compiler cli angular flex layout beta angular material ngtools webpack schematics angular schematics update rxjs typescript webpack returned the functionality of ng build back
0
2,571
7,966,084,860
IssuesEvent
2018-07-14 17:18:14
LabOfOz/SeeClarke
https://api.github.com/repos/LabOfOz/SeeClarke
opened
Let's reduce the 1MB file size!
Architecture polish
It looks like we may be able to reduce the file size by using import vs require. Let's set it up so that everything in `/src` is `required` and any dependencies are `imported` ![image](https://user-images.githubusercontent.com/36643175/42726789-0073d208-874f-11e8-90ab-161280496dcf.png)
1.0
Let's reduce the 1MB file size! - It looks like we may be able to reduce the file size by using import vs require. Let's set it up so that everything in `/src` is `required` and any dependencies are `imported` ![image](https://user-images.githubusercontent.com/36643175/42726789-0073d208-874f-11e8-90ab-161280496dcf.png)
non_process
let s reduce the file size it looks like we may be able to reduce the file size by using import vs require let s set it up so that everything in src is required and any dependencies are imported
0
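The record above proposes switching from `require` to `import` so that bundlers can tree-shake unused code out of the bundle. A minimal, self-contained Node sketch of the underlying idea — `fakeDependency` is a hypothetical stand-in for a real dependency module, not anything from the issue:

```javascript
// Why the issue suggests `import` over `require` for bundle size:
// `require` hands back a module's entire export object at runtime, so a
// bundler must keep every member reachable; a static `import { used }`
// lets bundlers such as webpack or Rollup drop the unused exports
// ("tree shaking"), because usage is provable at build time.

// Hypothetical stand-in for a dependency module:
const fakeDependency = {
  used: () => "small",
  unused: new Array(1000).fill("dead weight"), // shakeable only via static imports
};

// CommonJS-style destructuring still keeps the whole object in the bundle,
// since the bundler cannot prove `unused` is never accessed dynamically.
const { used } = fakeDependency;
console.log(used()); // "small"
console.log(Object.keys(fakeDependency).length); // 2 — both members retained
```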
4,736
7,594,722,912
IssuesEvent
2018-04-27 00:48:49
SafeNetConsulting-Milwaukee/feedbot
https://api.github.com/repos/SafeNetConsulting-Milwaukee/feedbot
closed
Consider using Danger JS to formalize contribution process
process
This may be more overhead than we end up needing, but Danger JS (https://github.com/danger/danger-js, http://danger.systems/js/) looks really good for enforcing consistency with each contribution. You can set requirements about how code should be structured, how pull requests should be tagged, if tests should be included, etc. It looks like it's a lot more than we need right now, but especially if we ever get to a point where we need to integrate with an external product management tool, this would probably be the way to enforce that.
1.0
Consider using Danger JS to formalize contribution process - This may be more overhead than we end up needing, but Danger JS (https://github.com/danger/danger-js, http://danger.systems/js/) looks really good for enforcing consistency with each contribution. You can set requirements about how code should be structured, how pull requests should be tagged, if tests should be included, etc. It looks like it's a lot more than we need right now, but especially if we ever get to a point where we need to integrate with an external product management tool, this would probably be the way to enforce that.
process
consider using danger js to formalize contribution process this may be more overhead than we end up needing but danger js looks really good for enforcing consistency with each contribution you can set requirements about how code should be structured how pull requests should be tagged if tests should be included etc it looks like it s a lot more than we need right now but especially if we ever get to a point where we need to integrate with an external product management tool this would probably be the way to enforce that
1
2,997
5,971,006,582
IssuesEvent
2017-05-31 00:42:39
IIIF/iiif.io
https://api.github.com/repos/IIIF/iiif.io
opened
Should we require CLAs for PRs?
process
A project I'm associated with via the Getty has started using CLAs, with a nice little integration: https://cla-assistant.io/ I forget if we've discussed CLAs, but (a) should we require them? and (b) should we use the integration?
1.0
Should we require CLAs for PRs? - A project I'm associated with via the Getty has started using CLAs, with a nice little integration: https://cla-assistant.io/ I forget if we've discussed CLAs, but (a) should we require them? and (b) should we use the integration?
process
should we require clas for prs a project i m associated with via the getty has started using clas with a nice little integration i forget if we ve discussed clas but a should we require them and b should we use the integration
1
9,010
12,123,015,318
IssuesEvent
2020-04-22 12:01:14
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
load database layers after running algorithm
Feature Request Feedback Processing
Author Name: **Paolo Cavallini** (@pcav) Original Redmine Issue: [13123](https://issues.qgis.org/issues/13123) Redmine category:processing/core --- There are ways that allow Processing to produce an output in a database instead of a file. Those are not automatically loaded after the tool finishes to run as files are. It would be nice if those could be automatically loaded in the project as file outputs.
1.0
load database layers after running algorithm - Author Name: **Paolo Cavallini** (@pcav) Original Redmine Issue: [13123](https://issues.qgis.org/issues/13123) Redmine category:processing/core --- There are ways that allow Processing to produce an output in a database instead of a file. Those are not automatically loaded after the tool finishes to run as files are. It would be nice if those could be automatically loaded in the project as file outputs.
process
load database layers after running algorithm author name paolo cavallini pcav original redmine issue redmine category processing core there are ways that allow processing to produce an output in a database instead of a file those are not automatically loaded after the tool finishes to run as files are it would be nice if those could be automatically loaded in the project as file outputs
1
22,541
31,711,954,718
IssuesEvent
2023-09-09 11:42:21
firebase/firebase-cpp-sdk
https://api.github.com/repos/firebase/firebase-cpp-sdk
reopened
[C++] Nightly Integration Testing Report for Firestore
type: process nightly-testing
<hidden value="build-dashboard-comment-start"></hidden> ### Testing History (last 7 days) | Date|Build vs Source Repo|Test vs Source Repo|SDK Packaging|Build vs SDK Package|Test vs SDK Package|Notes | |---|---|---|---|---|---|---| | 2023&#8209;09&#8209;02|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6057689584)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6057689584)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6057211335)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6058318146)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6058318146)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> | | 2023&#8209;09&#8209;03|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063803726)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063803726)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063291788)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6064402493)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6064402493)|<details><summary>&nbsp;</summary>A test error in Firestore on iOS and a test flake in Firestore on iOS.</details> | | 2023&#8209;09&#8209;04|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6070750409)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6073395386)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6073395386)|<details><summary>&nbsp;</summary>Test flakes in Firestore on iOS and Android.</details> | | 
2023&#8209;09&#8209;05|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6070750409)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6073395386)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6073395386)|<details><summary>&nbsp;</summary>Test flakes in Firestore on iOS and Android.</details> | | 2023&#8209;09&#8209;06|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6095613858)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6095613858)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6094380095)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6096984293)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6096984293)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> | | 2023&#8209;09&#8209;07|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6107060535)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6109859344)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6109859344)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> | | 
2023&#8209;09&#8209;08|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6120382743)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6120382743)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6119252706)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6121559038)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6121559038)| | <details><summary>View extended history (last 30 days)</summary> ### Testing History (last 30 days) | Date|Build vs Source Repo|Test vs Source Repo|SDK Packaging|Build vs SDK Package|Test vs SDK Package|Notes | |---|---|---|---|---|---|---| | 2023&#8209;08&#8209;10|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5819685764)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5819685764)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5818524470)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5821078054)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5821078054)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> | | 2023&#8209;08&#8209;11|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5831321988)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5831321988)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5830303478)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5832470480)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5832470480)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> | | 
2023&#8209;08&#8209;12|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5840693778)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5840693778)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5840227837)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5841241913)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5841241913)|<details><summary>&nbsp;</summary>A test error in Firestore on Windows and a test flake in Firestore on Android.</details> | | 2023&#8209;08&#8209;13|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5846664052)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5846664052)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5846168858)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5847284254)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5847284254)|<details><summary>&nbsp;</summary>Test flakes in Firestore on iOS and Android emulator.</details> | | 2023&#8209;08&#8209;14|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5854399937)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5854399937)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5853326172)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5855699942)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5855699942)|<details><summary>&nbsp;</summary>A test flake in Firestore on iOS.</details> | | 
2023&#8209;08&#8209;15|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5865921590)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5865921590)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5864951827)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5866910779)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5866910779)|<details><summary>&nbsp;</summary>A test error in Firestore on Android and a test flake in Firestore on Android emulator.</details> | | 2023&#8209;08&#8209;16|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5877382184)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5877382184)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5876217751)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5878566225)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5878566225)|<details><summary>&nbsp;</summary>A test error in Firestore on Android emulator, and test flakes in Firestore on Android and iOS.</details> | | 2023&#8209;08&#8209;17|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5889490566)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5889490566)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5888330349)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5890685333)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5890685333)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> | | 
2023&#8209;08&#8209;18|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5901348428)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5901348428)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5900283505)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5902248114)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5902248114)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> | | 2023&#8209;08&#8209;19|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5910771268)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5910771268)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5910300801)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5911326992)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5911326992)|<details><summary>&nbsp;</summary>A test error in Firestore on Android emulator and a test flake in Firestore on Android emulator.</details> | | 2023&#8209;08&#8209;20|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5916711222)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5916711222)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5916218957)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5917247229)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5917247229)|<details><summary>&nbsp;</summary>Test flakes in Firestore on iOS and Android.</details> | | 
2023&#8209;08&#8209;21|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5924805872)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5924805872)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5923604350)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5926197853)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5926197853)|<details><summary>&nbsp;</summary>Missing build logs on iOS, missing test logs on Windows, and a test flake in Firestore on Android.</details> | | 2023&#8209;08&#8209;22|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5937325869)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5937325869)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5936094056)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5938885064)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5938885064)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> | | 2023&#8209;08&#8209;23|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5949876662)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5949876662)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5948653837)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5951296655)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5951296655)|<details><summary>&nbsp;</summary>A test error in Firestore on iOS and a test flake in Firestore on Android emulator.</details> | | 
2023&#8209;08&#8209;24|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5962282457)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5962282457)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5961080456)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5963469477)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5963469477)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android emulator.</details> |
| 2023&#8209;08&#8209;25|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5974558862)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5974558862)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5973438560)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5975931159)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5975931159)|<details><summary>&nbsp;</summary>Test flakes in Firestore on iOS and Android.</details> |
| 2023&#8209;08&#8209;26|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5984160583)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5984160583)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5983683802)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5984622291)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5984622291)|<details><summary>&nbsp;</summary>A test error in Firestore on Android emulator and a test flake in Firestore on Android emulator.</details> |
| 2023&#8209;08&#8209;27|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5990240873)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5990240873)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5989731185)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5990772847)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5990772847)|<details><summary>&nbsp;</summary>A build error in Firestore on MacOS and a test flake in Firestore on Android emulator.</details> |
| 2023&#8209;08&#8209;28|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5998437566)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5998437566)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5997247104)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5999476369)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5999476369)|<details><summary>&nbsp;</summary>A test error in Firestore on Android, and test flakes in Firestore on Android and iOS.</details> |
| 2023&#8209;08&#8209;29|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6010783180)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6010783180)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6009553519)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6011951912)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6011951912)|<details><summary>&nbsp;</summary>Test flakes in Firestore on Android emulator and iOS.</details> |
| 2023&#8209;08&#8209;30|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6023419625)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6023419625)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6022201857)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6024700010)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6024700010)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> |
| 2023&#8209;08&#8209;31|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6035882114)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6035882114)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6034662992)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6037254582)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6037254582)|<details><summary>&nbsp;</summary>Test flakes in Firestore on Android emulator and iOS.</details> |
| 2023&#8209;09&#8209;01|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6048093300)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6048093300)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6046969807)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6049196143)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6049196143)|<details><summary>&nbsp;</summary>Test flakes in Firestore on iOS and Android.</details> |
| 2023&#8209;09&#8209;02|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6057689584)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6057689584)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6057211335)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6058318146)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6058318146)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> |
| 2023&#8209;09&#8209;03|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063803726)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063803726)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063291788)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6064402493)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6064402493)|<details><summary>&nbsp;</summary>A test error in Firestore on iOS and a test flake in Firestore on iOS.</details> |
| 2023&#8209;09&#8209;04|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6070750409)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6073395386)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6073395386)|<details><summary>&nbsp;</summary>Test flakes in Firestore on iOS and Android.</details> |
| 2023&#8209;09&#8209;05|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6070750409)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6073395386)|[❌&nbsp;**Failure**](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6073395386)|<details><summary>&nbsp;</summary>Test flakes in Firestore on iOS and Android.</details> |
| 2023&#8209;09&#8209;06|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6095613858)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6095613858)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6094380095)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6096984293)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6096984293)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> |
| 2023&#8209;09&#8209;07|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055)|[✅&nbsp;Pass&nbsp;(flaky)](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6107060535)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6109859344)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6109859344)|<details><summary>&nbsp;</summary>A test flake in Firestore on Android.</details> |
| 2023&#8209;09&#8209;08|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6120382743)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6120382743)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6119252706)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6121559038)|[✅&nbsp;Pass](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6121559038)| |

</details>

<details><summary>Top 10 flakes/failures (last 30 days)</summary>

| # | Latest | Product | Platform | Test Info |
|---|---|---|---|---|
| 20 | [1&nbsp;day&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) | Firestore | Android | Crash or timeout&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5819685764) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5831321988) [3](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5840693778) [4](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5877382184) [5](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5889490566) [6](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5901348428) [7](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5916711222) [8](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5924805872) [9](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5937325869) [10](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5949876662) [11](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5984160583) [12](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6010783180) [13](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6023419625) [14](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6035882114) [15](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6048093300) [16](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6057689584) [17](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400) [18](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6083097493) [19](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6095613858) [20](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) |
| 9 | [3&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6083097493) | Firestore | iOS | Crash or timeout&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5854399937) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5877382184) [3](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5916711222) [4](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5998437566) [5](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6010783180) [6](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6035882114) [7](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6048093300) [8](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063803726) [9](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6083097493) |
| 8 | [8&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6035882114) | Firestore | Android | TaskTest.IsCompleteShouldReturnTrueForCanceledTask&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5819685764) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5846664052) [3](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5865921590) [4](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5910771268) [5](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5937325869) [6](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5949876662) [7](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5962282457) [8](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6035882114) |
| 5 | [11&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5998437566) | Firestore | Android | Unspecified test&nbsp;(failure)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5865921590) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5877382184) [3](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5910771268) [4](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5984160583) [5](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5998437566) |
| 2 | [5&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063803726) | Firestore | iOS | Unspecified test&nbsp;(failure)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5949876662) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063803726) |
| 2 | [14&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5974558862) | Firestore | Android | WriteBatchTest.TestCannotUpdateNonexistentDocuments&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5924805872) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5974558862) |
| 2 | [4&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400) | Firestore | iOS | TransactionTest.TestGetNonexistentDocumentThenCreate&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5846664052) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400) |
| 1 | [1&nbsp;day&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) | Firestore | Android | AggregateQuerySnapshotTest.IdenticalSnapshotFromCollectionQueriesShouldHaveSameHash&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) |
| 1 | [1&nbsp;day&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) | Firestore | Android | SourceTest.GetNonExistingCollectionWhileOnlineWithSourceEqualToServer&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) |
| 1 | [3&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6083097493) | Firestore | Android | TransactionTest.TestRunsTransactionsAfterGettingNonexistentDoc&nbsp;(failure)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6083097493) |

</details>

<details><summary>📄</summary><pre>
2023-09-02 Pass Pass (flaky) Pass Pass Pass A test flake in Firestore on Android.
2023-09-03 Pass Failure Pass Pass Pass (flaky) A test error in Firestore on iOS and a test flake in Firestore on iOS.
2023-09-04 Pass Pass (flaky) Pass Failure Failure Test flakes in Firestore on iOS and Android.
2023-09-05 Pass Pass (flaky) Pass Failure Failure '''
2023-09-06 Pass Pass (flaky) Pass Pass Pass A test flake in Firestore on Android.
2023-09-07 Pass Pass (flaky) Pass Pass Pass '''
2023-09-08 Pass Pass Pass Pass Pass
</pre></details>

***

<hidden value="build-dashboard-comment-end"></hidden>
<hidden value="integration-test-status-comment"></hidden>

### ✅&nbsp; [build against repo] Integration test succeeded!

Requested by @sunmou99 on commit f55c43c2485e91c67a52ce60e1191fa3b0c4df61
Last updated: Fri Sep 8 04:42 PDT 2023

**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6120382743)**

<hidden value="integration-test-status-comment"></hidden>

***

### ✅&nbsp; [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit f55c43c2485e91c67a52ce60e1191fa3b0c4df61
Last updated: Fri Sep 8 06:51 PDT 2023

**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6121559038)**

<hidden value="integration-test-status-comment"></hidden>

***

### ✅&nbsp; [build against tip] Integration test succeeded!

Requested by @sunmou99 on commit f55c43c2485e91c67a52ce60e1191fa3b0c4df61
Last updated: Sat Sep 9 04:35 PDT 2023

**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6130501553)**
[C++] Nightly Integration Testing Report for Firestore - <hidden value="build-dashboard-comment-start"></hidden>

<details><summary>Top 10 flakes/failures (last 30 days)</summary>

| # | Latest | Product | Platform | Test Info |
|---|---|---|---|---|
| 20 | [1&nbsp;day&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) | Firestore | Android | Crash or timeout&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5819685764) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5831321988) [3](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5840693778) [4](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5877382184) [5](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5889490566) [6](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5901348428) [7](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5916711222) [8](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5924805872) [9](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5937325869) [10](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5949876662) [11](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5984160583) [12](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6010783180) [13](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6023419625) [14](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6035882114) [15](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6048093300) [16](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6057689584)
[17](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400) [18](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6083097493) [19](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6095613858) [20](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) | | 9 | [3&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6083097493) | Firestore | iOS | Crash or timeout&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5854399937) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5877382184) [3](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5916711222) [4](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5998437566) [5](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6010783180) [6](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6035882114) [7](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6048093300) [8](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063803726) [9](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6083097493) | | 8 | [8&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6035882114) | Firestore | Android | TaskTest.IsCompleteShouldReturnTrueForCanceledTask&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5819685764) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5846664052) [3](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5865921590) [4](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5910771268) [5](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5937325869) [6](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5949876662) [7](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5962282457) [8](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6035882114) | | 5 | 
[11&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5998437566) | Firestore | Android | Unspecified test&nbsp;(failure)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5865921590) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5877382184) [3](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5910771268) [4](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5984160583) [5](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5998437566) | | 2 | [5&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063803726) | Firestore | iOS | Unspecified test&nbsp;(failure)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5949876662) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6063803726) | | 2 | [14&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5974558862) | Firestore | Android | WriteBatchTest.TestCannotUpdateNonexistentDocuments&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5924805872) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5974558862) | | 2 | [4&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400) | Firestore | iOS | TransactionTest.TestGetNonexistentDocumentThenCreate&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5846664052) [2](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6071960400) | | 1 | [1&nbsp;day&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) | Firestore | Android | AggregateQuerySnapshotTest.IdenticalSnapshotFromCollectionQueriesShouldHaveSameHash&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) | | 1 | 
[1&nbsp;day&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) | Firestore | Android | SourceTest.GetNonExistingCollectionWhileOnlineWithSourceEqualToServer&nbsp;(flaky)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6108288055) | | 1 | [3&nbsp;days&nbsp;ago](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6083097493) | Firestore | Android | TransactionTest.TestRunsTransactionsAfterGettingNonexistentDoc&nbsp;(failure)<br/>&nbsp;&nbsp;&nbsp;Logs: [1](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6083097493) | </details> <details><summary>📄</summary><pre> 2023-09-02 Pass Pass (flaky) Pass Pass Pass A test flake in Firestore on Android. 2023-09-03 Pass Failure Pass Pass Pass (flaky) A test error in Firestore on iOS and a test flake in Firestore on iOS. 2023-09-04 Pass Pass (flaky) Pass Failure Failure Test flakes in Firestore on iOS and Android. 2023-09-05 Pass Pass (flaky) Pass Failure Failure ''' 2023-09-06 Pass Pass (flaky) Pass Pass Pass A test flake in Firestore on Android. 2023-09-07 Pass Pass (flaky) Pass Pass Pass ''' 2023-09-08 Pass Pass Pass Pass Pass </pre></details> *** <hidden value="build-dashboard-comment-end"></hidden> <hidden value="integration-test-status-comment"></hidden> ### ✅&nbsp; [build against repo] Integration test succeeded! Requested by @sunmou99 on commit f55c43c2485e91c67a52ce60e1191fa3b0c4df61 Last updated: Fri Sep 8 04:42 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6120382743)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against SDK] Integration test succeeded! 
Requested by @firebase-workflow-trigger[bot] on commit f55c43c2485e91c67a52ce60e1191fa3b0c4df61 Last updated: Fri Sep 8 06:51 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6121559038)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against tip] Integration test succeeded! Requested by @sunmou99 on commit f55c43c2485e91c67a52ce60e1191fa3b0c4df61 Last updated: Sat Sep 9 04:35 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/6130501553)**
process
nightly integration testing report for firestore testing history last days date build vs source repo test vs source repo sdk packaging build vs sdk package test vs sdk package notes test flake in firestore on android test error in firestore on ios and a test flake in firestore on ios flakes in firestore on ios and android flakes in firestore on ios and android test flake in firestore on android test flake in firestore on android view extended history last days testing history last days date build vs source repo test vs source repo sdk packaging build vs sdk package test vs sdk package notes test flake in firestore on android test flake in firestore on android test error in firestore on windows and a test flake in firestore on android flakes in firestore on ios and android emulator test flake in firestore on ios test error in firestore on android and a test flake in firestore on android emulator test error in firestore on android emulator and test flakes in firestore on android and ios test flake in firestore on android test flake in firestore on android test error in firestore on android emulator and a test flake in firestore on android emulator flakes in firestore on ios and android build logs on ios missing test logs on windows and a test flake in firestore on android test flake in firestore on android test error in firestore on ios and a test flake in firestore on android emulator test flake in firestore on android emulator flakes in firestore on ios and android test error in firestore on android emulator and a test flake in firestore on android emulator build error in firestore on macos and a test flake in firestore on android emulator test error in firestore on android and test flakes in firestore on android and ios flakes in firestore on android emulator and ios test flake in firestore on android flakes in firestore on android emulator and ios flakes in firestore on ios and android test flake in firestore on android test error in firestore on ios and a test 
flake in firestore on ios flakes in firestore on ios and android flakes in firestore on ios and android test flake in firestore on android test flake in firestore on android top flakes failures last days latest product platform test info firestore android crash or timeout nbsp flaky nbsp nbsp nbsp logs firestore ios crash or timeout nbsp flaky nbsp nbsp nbsp logs firestore android tasktest iscompleteshouldreturntrueforcanceledtask nbsp flaky nbsp nbsp nbsp logs firestore android unspecified test nbsp failure nbsp nbsp nbsp logs firestore ios unspecified test nbsp failure nbsp nbsp nbsp logs firestore android writebatchtest testcannotupdatenonexistentdocuments nbsp flaky nbsp nbsp nbsp logs firestore ios transactiontest testgetnonexistentdocumentthencreate nbsp flaky nbsp nbsp nbsp logs firestore android aggregatequerysnapshottest identicalsnapshotfromcollectionqueriesshouldhavesamehash nbsp flaky nbsp nbsp nbsp logs firestore android sourcetest getnonexistingcollectionwhileonlinewithsourceequaltoserver nbsp flaky nbsp nbsp nbsp logs firestore android transactiontest testrunstransactionsaftergettingnonexistentdoc nbsp failure nbsp nbsp nbsp logs 📄 pass pass flaky pass pass pass a test flake in firestore on android pass failure pass pass pass flaky a test error in firestore on ios and a test flake in firestore on ios pass pass flaky pass failure failure test flakes in firestore on ios and android pass pass flaky pass failure failure pass pass flaky pass pass pass a test flake in firestore on android pass pass flaky pass pass pass pass pass pass pass pass ✅ nbsp integration test succeeded requested by on commit last updated fri sep pdt ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated fri sep pdt ✅ nbsp integration test succeeded requested by on commit last updated sat sep pdt
1
5,862
8,682,542,321
IssuesEvent
2018-12-02 09:33:38
linnovate/root
https://api.github.com/repos/linnovate/root
reopened
unable to delete files in folders tab
Fixed Process bug
After adding a file to a folder and trying to delete it, it still stays after switching tabs or refreshing the screen. For example, I deleted this file and it still shows up after I refreshed the page ![image](https://user-images.githubusercontent.com/38312178/48780877-8cdfab00-ece3-11e8-8ad9-cf7753140b41.png)
1.0
unable to delete files in folders tab - After adding a file to a folder and trying to delete it, it still stays after switching tabs or refreshing the screen. For example, I deleted this file and it still shows up after I refreshed the page ![image](https://user-images.githubusercontent.com/38312178/48780877-8cdfab00-ece3-11e8-8ad9-cf7753140b41.png)
process
unable to delete files in folders tab after adding a file to a folder and trying to delete it it still stays after switching tabs or refreshing the screen for example i deleted this file and it still shows up after i refreshed the page
1
12,499
14,961,475,627
IssuesEvent
2021-01-27 07:49:49
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] Open study > Enrollment status is incorrectly displayed as 'Yet to enroll'
Bug P2 Participant manager datastore Process: Fixed
**Steps:** 1. Enroll into any open study successfully 2. Withdraw from the study 3. Pass the eligibility test for the same study and don't provide consent 4. Observe the Enrollment status value in participant details page **Actual:** Enrollment status is incorrectly displayed as 'Yet to enroll' **Expected:** Enrollment status should display 'Withdrawn' Note: Issue observed in Enrollment registry page and participant details page ![Screenshot_3](https://user-images.githubusercontent.com/60386291/105465568-153daa00-5cb9-11eb-809c-d087adc6fb58.png)
1.0
[PM] Open study > Enrollment status is incorrectly displayed as 'Yet to enroll' - **Steps:** 1. Enroll into any open study successfully 2. Withdraw from the study 3. Pass the eligibility test for the same study and don't provide consent 4. Observe the Enrollment status value in participant details page **Actual:** Enrollment status is incorrectly displayed as 'Yet to enroll' **Expected:** Enrollment status should display 'Withdrawn' Note: Issue observed in Enrollment registry page and participant details page ![Screenshot_3](https://user-images.githubusercontent.com/60386291/105465568-153daa00-5cb9-11eb-809c-d087adc6fb58.png)
process
open study enrollment status is incorrectly displayed as yet to enroll steps enroll into any open study successfully withdraw from the study pass the eligibility test for the same study and don t provide consent observe the enrollment status value in participant details page actual enrollment status is incorrectly displayed as yet to enroll expected enrollment status should display withdrawn note issue observed in enrollment registry page and participant details page
1
1,520
4,112,770,546
IssuesEvent
2016-06-07 11:52:11
nodejs/node
https://api.github.com/repos/nodejs/node
closed
Possible Windows Node 6 regression with spawn and a substed drive
child_process windows
* **Version**: v6.0.0 * **Platform**: Windows 32 and 64 bit * **Subsystem**: I've noticed a minor breaking change in either `child_process.spawn` or `path.resolve` (currently unclear), on Windows with Node 6. This issue appears to crop up when a Node script is executed from a `SUBST`'d path. The production steps are tricky so I have created a [reproduction in a repo][1] that runs its tests in [AppVeyor][2]. [1]: https://github.com/xzyfer/node-6-native-ext-bug [2]: https://ci.appveyor.com/project/xzyfer/node-6-native-ext-bug/build/job/g22imklu03vipomt
1.0
Possible Windows Node 6 regression with spawn and a substed drive - * **Version**: v6.0.0 * **Platform**: Windows 32 and 64 bit * **Subsystem**: I've noticed a minor breaking change in either `child_process.spawn` or `path.resolve` (currently unclear), on Windows with Node 6. This issue appears to crop up when a Node script is executed from a `SUBST`'d path. The production steps are tricky so I have created a [reproduction in a repo][1] that runs its tests in [AppVeyor][2]. [1]: https://github.com/xzyfer/node-6-native-ext-bug [2]: https://ci.appveyor.com/project/xzyfer/node-6-native-ext-bug/build/job/g22imklu03vipomt
process
possible windows node regression with spawn and a substed drive version platform windows and bit subsystem i ve noticed a minor breaking change in either child process spawn or path resolve currently unclear on windows with node this issue appears to crop up when a node script is execute from a subst d path the production steps are tricky so i have created a that runs its tests in
1
3,230
6,289,280,072
IssuesEvent
2017-07-19 18:51:20
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
System.Diagnostics.Process.MainModule.FileName has junk characters in the prefix on UAP
area-System.Diagnostics.Process
(Test case will be added soon, creating issue so that I can disable that in the PR) Junk characters might be BOM or Xunit display error ``` ERROR: System.Diagnostics.Tests.ProcessTests.TestMainModuleOnNonOSX [FAIL] Assert.EndsWith() Failure: Expected: xunit.runner.uap.exe Actual: ┬╖┬╖┬╖XUnit.Runner.Uap.exe ```
1.0
System.Diagnostics.Process.MainModule.FileName has junk characters in the prefix on UAP - (Test case will be added soon, creating issue so that I can disable that in the PR) Junk characters might be BOM or Xunit display error ``` ERROR: System.Diagnostics.Tests.ProcessTests.TestMainModuleOnNonOSX [FAIL] Assert.EndsWith() Failure: Expected: xunit.runner.uap.exe Actual: ┬╖┬╖┬╖XUnit.Runner.Uap.exe ```
process
system diagnostics process mainmodule filename has junk characters in the prefix on uap test case will be added soon creating issue so that i can disable that in the pr junk characters might be bom or xunit display error error system diagnostics tests processtests testmainmoduleonnonosx assert endswith failure expected xunit runner uap exe actual ┬╖┬╖┬╖xunit runner uap exe
1
20,414
27,073,317,890
IssuesEvent
2023-02-14 08:50:39
AvaloniaUI/Avalonia
https://api.github.com/repos/AvaloniaUI/Avalonia
closed
Text Selection affects justify in TextBox
bug area-textprocessing
https://user-images.githubusercontent.com/4997065/218300427-33eab39c-e679-45e7-9561-d343a1a38e5f.mov TextBox with TextWrapping=Wrap, Width=232, TextAlign=Justify. MacOS Ventura. 11.0.0-preview5
1.0
Text Selection affects justify in TextBox - https://user-images.githubusercontent.com/4997065/218300427-33eab39c-e679-45e7-9561-d343a1a38e5f.mov TextBox with TextWrapping=Wrap, Width=232, TextAlign=Justify. MacOS Ventura. 11.0.0-preview5
process
text selection affects justify in textbox textbox with textwrapping wrap width textalign justify macos ventura
1
509,188
14,723,727,266
IssuesEvent
2021-01-06 01:02:58
threefoldtech/0-bootstrap
https://api.github.com/repos/threefoldtech/0-bootstrap
closed
boot: permission denied
priority_critical
Kernel download always fails with: Permission denied (0216eb3c) This is caused by Let's Encrypt, which changed their root certificates. The bundled certificates need to be updated.
1.0
boot: permission denied - Kernel download always fails with: Permission denied (0216eb3c) This is caused by Let's Encrypt, which changed their root certificates. The bundled certificates need to be updated.
non_process
boot permission denied kernel download always fails with permission denied this is caused by let s encrypt who changed their root certificates bundle certificates needs to be updated
0
164,487
13,943,512,843
IssuesEvent
2020-10-22 23:19:07
decred/dcrd
https://api.github.com/repos/decred/dcrd
closed
JSON-RPC API searchrawtransactions documentation max results
documentation
The [JSON-RPC API searchrawtransactions](https://github.com/decred/dcrd/blob/9547385fc04b2d27809f1fcc8ce19fb1e2c37291/docs/json_rpc_api.mediawiki#searchrawtransactions) documentation should specify the max number of results per request.
1.0
JSON-RPC API searchrawtransactions documentation max results - The [JSON-RPC API searchrawtransactions](https://github.com/decred/dcrd/blob/9547385fc04b2d27809f1fcc8ce19fb1e2c37291/docs/json_rpc_api.mediawiki#searchrawtransactions) documentation should specify the max number of results per request.
non_process
json rpc api searchrawtransactions documentation max results the documentation should specify the max number of results per request
0
271,674
29,659,340,316
IssuesEvent
2023-06-10 01:18:28
pazhanivel07/linux-4.19.72
https://api.github.com/repos/pazhanivel07/linux-4.19.72
closed
CVE-2022-3643 (Medium) detected in linuxlinux-4.19.83 - autoclosed
Mend: dependency security vulnerability
## CVE-2022-3643 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.83</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/pazhanivel07/linux-4.19.72/commit/ce28e4f7a922d93d9b737061ae46827305c8c30a">ce28e4f7a922d93d9b737061ae46827305c8c30a</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/xen-netback/netback.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Guests can trigger NIC interface reset/abort/crash via netback It is possible for a guest to trigger a NIC interface reset/abort/crash in a Linux based network backend by sending certain kinds of packets. It appears to be an (unwritten?) assumption in the rest of the Linux network stack that packet protocol headers are all contained within the linear section of the SKB and some NICs behave badly if this is not the case. This has been reported to occur with Cisco (enic) and Broadcom NetXtrem II BCM5780 (bnx2x) though it may be an issue with other NICs/drivers as well. In case the frontend is sending requests with split headers, netback will forward those violating above mentioned assumption to the networking core, resulting in said misbehavior. 
<p>Publish Date: 2022-12-07 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3643>CVE-2022-3643</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-3643">https://www.linuxkernelcves.com/cves/CVE-2022-3643</a></p> <p>Release Date: 2022-12-07</p> <p>Fix Resolution: v4.9.336,v4.14.302,v4.19.269,v5.4.227,v5.10.159,v5.15.83,v6.0.13,v6.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-3643 (Medium) detected in linuxlinux-4.19.83 - autoclosed - ## CVE-2022-3643 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.83</b></p></summary> <p> <p>Apache Software Foundation (ASF)</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/pazhanivel07/linux-4.19.72/commit/ce28e4f7a922d93d9b737061ae46827305c8c30a">ce28e4f7a922d93d9b737061ae46827305c8c30a</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/xen-netback/netback.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Guests can trigger NIC interface reset/abort/crash via netback It is possible for a guest to trigger a NIC interface reset/abort/crash in a Linux based network backend by sending certain kinds of packets. It appears to be an (unwritten?) assumption in the rest of the Linux network stack that packet protocol headers are all contained within the linear section of the SKB and some NICs behave badly if this is not the case. This has been reported to occur with Cisco (enic) and Broadcom NetXtrem II BCM5780 (bnx2x) though it may be an issue with other NICs/drivers as well. 
In case the frontend is sending requests with split headers, netback will forward those violating above mentioned assumption to the networking core, resulting in said misbehavior. <p>Publish Date: 2022-12-07 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3643>CVE-2022-3643</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-3643">https://www.linuxkernelcves.com/cves/CVE-2022-3643</a></p> <p>Release Date: 2022-12-07</p> <p>Fix Resolution: v4.9.336,v4.14.302,v4.19.269,v5.4.227,v5.10.159,v5.15.83,v6.0.13,v6.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files drivers net xen netback netback c vulnerability details guests can trigger nic interface reset abort crash via netback it is possible for a guest to trigger a nic interface reset abort crash in a linux based network backend by sending certain kinds of packets it appears to be an unwritten assumption in the rest of the linux network stack that packet protocol headers are all contained within the linear section of the skb and some nics behave badly if this is not the case this has been reported to occur with cisco enic and broadcom netxtrem ii though it may be an issue with other nics drivers as well in case the frontend is sending requests with split headers netback will forward those violating above mentioned assumption to the networking core resulting in said misbehavior publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope changed impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
664,927
22,292,787,286
IssuesEvent
2022-06-12 16:02:50
Jenesius/vue-form
https://api.github.com/repos/Jenesius/vue-form
closed
Configuration
low-priority
```js { debug?: boolean // All information related to the form will be output to the console } ```
1.0
Configuration - ```js { debug?: boolean // All information related to the form will be output to the console } ```
non_process
configuration js debug boolean all information related to the form will be output to the console
0
4,264
7,189,238,196
IssuesEvent
2018-02-02 13:19:52
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
Should store price per block
libs-etherlib status-inprocess type-enhancement
Data format change: search for if (false) { //opt.priceBlocks) { for an example of why we should store price at the block level when the block is first created
1.0
Should store price per block - Data format change: search for if (false) { //opt.priceBlocks) { for an example of why we should store price at the block level when the block is first created
process
should store price per block data format change search for if false opt priceblocks for an example of why we should store price at the block level when the block is first created
1
173,110
14,402,613,007
IssuesEvent
2020-12-03 15:06:16
rust-lang/crates.io
https://api.github.com/repos/rust-lang/crates.io
closed
Don't rewrite existing docs.rs links
A-documentation-metadata C-bug
Today I published two crates since winapi got update. https://crates.io/crates/windows-win https://crates.io/crates/clipboard-win In both cases crates.io ignores my package.documentation attribute and instead enforces own link to docs.rs Thanks to that instead of pointing to docs.rs for windows platform as I specified: ``` documentation = "https://docs.rs/windows-win/2.1.0/x86_64-pc-windows-msvc/windows_win/" documentation = "https://docs.rs/clipboard-win/2.1.0/x86_64-pc-windows-msvc/clipboard_win/" ``` It points to: ``` https://docs.rs/windows-win/2.1.0/ https://docs.rs/clipboard-win/2.1.1/ ``` It is not really nice since in past crates.io did not ignore this attribute and windows only crates need a way to point on windows platform docs as otherwise it points to empty docs. The same shit happens to winapi
1.0
Don't rewrite existing docs.rs links - Today I published two crates since winapi got update. https://crates.io/crates/windows-win https://crates.io/crates/clipboard-win In both cases crates.io ignores my package.documentation attribute and instead enforces own link to docs.rs Thanks to that instead of pointing to docs.rs for windows platform as I specified: ``` documentation = "https://docs.rs/windows-win/2.1.0/x86_64-pc-windows-msvc/windows_win/" documentation = "https://docs.rs/clipboard-win/2.1.0/x86_64-pc-windows-msvc/clipboard_win/" ``` It points to: ``` https://docs.rs/windows-win/2.1.0/ https://docs.rs/clipboard-win/2.1.1/ ``` It is not really nice since in past crates.io did not ignore this attribute and windows only crates need a way to point on windows platform docs as otherwise it points to empty docs. The same shit happens to winapi
non_process
don t rewrite existing docs rs links today i published two crates since winapi got update in both cases crates io ignores my package documentation attribute and instead enforces own link to docs rs thanks to that instead of pointing to docs rs for windows platform as i specified documentation documentation it points to it is not really nice since in past crates io did not ignore this attribute and windows only crates need a way to point on windows platform docs as otherwise it points to empty docs the same shit happens to winapi
0
18,400
24,537,256,680
IssuesEvent
2022-10-11 22:09:11
cagov/design-system
https://api.github.com/repos/cagov/design-system
opened
Remove issue templates
Process improvement
We only need 2 issue templates in GitHub. Need to remove all except General and Bug. ![Screen Shot 2022-10-11 at 3 06 33 PM](https://user-images.githubusercontent.com/98193284/195207229-074c7138-4b54-47e8-a47c-798a801381b0.png)
1.0
Remove issue templates - We only need 2 issue templates in GitHub. Need to remove all except General and Bug. ![Screen Shot 2022-10-11 at 3 06 33 PM](https://user-images.githubusercontent.com/98193284/195207229-074c7138-4b54-47e8-a47c-798a801381b0.png)
process
remove issue templates we only need issue templates in github need to remove all except general and bug
1
276,796
21,000,720,581
IssuesEvent
2022-03-29 17:10:14
Jacqueline192837/jcastillo_2a
https://api.github.com/repos/Jacqueline192837/jcastillo_2a
closed
Planning
documentation
**Description** Implement the program [2a.pdf](https://github.com/Jacqueline192837/jcastillo_2a/files/8343400/2a.pdf) using the standards: -R1 [OracleJavaCodeStandard.pdf](https://github.com/Jacqueline192837/jcastillo_2a/files/8343476/OracleJavaCodeStandard.pdf) -R2 [R2.pdf](https://github.com/Jacqueline192837/jcastillo_2a/files/8344907/R2.pdf) **Mockup** **Tests** At a minimum, test the program by counting the total program and part sizes in programs 1 and 2. Example output: ![image](https://user-images.githubusercontent.com/80423291/159963302-0c00cab7-4d53-4bc4-8f94-c4634b432087.png)
1.0
Planning - **Description** Implement the program [2a.pdf](https://github.com/Jacqueline192837/jcastillo_2a/files/8343400/2a.pdf) using the standards: -R1 [OracleJavaCodeStandard.pdf](https://github.com/Jacqueline192837/jcastillo_2a/files/8343476/OracleJavaCodeStandard.pdf) -R2 [R2.pdf](https://github.com/Jacqueline192837/jcastillo_2a/files/8344907/R2.pdf) **Mockup** **Tests** At a minimum, test the program by counting the total program and part sizes in programs 1 and 2. Example output: ![image](https://user-images.githubusercontent.com/80423291/159963302-0c00cab7-4d53-4bc4-8f94-c4634b432087.png)
non_process
planning description implement the program using the standards mockup tests at a minimum test the program by counting the total program and part sizes in programs and example output
0
118,252
11,965,008,681
IssuesEvent
2020-04-05 21:41:47
Kalafut-organization/elephant_vending_machine_frontend
https://api.github.com/repos/Kalafut-organization/elephant_vending_machine_frontend
opened
Documentation not auto built
documentation
The GitHub pages hosted docs are currently outdated. It seems that as of right now we have to manually build them for each commit. I think we could use [this GitHub Action](https://github.com/marketplace/actions/deploy-to-github-pages) to remedy this.
1.0
Documentation not auto built - The GitHub pages hosted docs are currently outdated. It seems that as of right now we have to manually build them for each commit. I think we could use [this GitHub Action](https://github.com/marketplace/actions/deploy-to-github-pages) to remedy this.
non_process
documentation not auto built the github pages hosted docs are currently outdated it seems that as of right now we have to manually build them for each commit i think we could use to remedy this
0
16,704
21,843,220,392
IssuesEvent
2022-05-18 00:11:13
googleapis/nodejs-memcache
https://api.github.com/repos/googleapis/nodejs-memcache
closed
promote library to GA
type: process api: memcache
Package name: **@google-cloud/memcache** Current release: **beta** Proposed release: **GA** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [x] 28 days elapsed since last beta release with new API surface - [x] Server API is GA - [x] Package API is stable, and we can commit to backward compatibility - [x] All dependencies are GA ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [ ] Per-API README includes a full description of the API - [ ] Per-API README contains at least one “getting started” sample using the most common API scenario - [ ] Manual code has been reviewed by API producer - [ ] Manual code has been reviewed by a DPE responsible for samples - [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
1.0
promote library to GA - Package name: **@google-cloud/memcache** Current release: **beta** Proposed release: **GA** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [x] 28 days elapsed since last beta release with new API surface - [x] Server API is GA - [x] Package API is stable, and we can commit to backward compatibility - [x] All dependencies are GA ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [ ] Per-API README includes a full description of the API - [ ] Per-API README contains at least one “getting started” sample using the most common API scenario - [ ] Manual code has been reviewed by API producer - [ ] Manual code has been reviewed by a DPE responsible for samples - [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
process
promote library to ga package name google cloud memcache current release beta proposed release ga instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required days elapsed since last beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
1