Unnamed: 0
int64
0
832k
id
float64
2.49B
32.1B
type
stringclasses
1 value
created_at
stringlengths
19
19
repo
stringlengths
7
112
repo_url
stringlengths
36
141
action
stringclasses
3 values
title
stringlengths
1
744
labels
stringlengths
4
574
body
stringlengths
9
211k
index
stringclasses
10 values
text_combine
stringlengths
96
211k
label
stringclasses
2 values
text
stringlengths
96
188k
binary_label
int64
0
1
759,915
26,618,479,466
IssuesEvent
2023-01-24 09:23:43
quadratic-funding/mpc-phase2-suite
https://api.github.com/repos/quadratic-funding/mpc-phase2-suite
closed
Github Authentication Token Expiration
Enhancement 🥋 Low Priority 🍏
Test how this would affect the tool, and document usage for coordinator setup
1.0
Github Authentication Token Expiration - Test how this would affect the tool, and document usage for coordinator setup
non_process
github authentication token expiration test how this would affect the tool and document usage for coordinator setup
0
10,519
13,303,094,831
IssuesEvent
2020-08-25 15:04:29
GoogleCloudPlatform/dotnet-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples
opened
[Firestore]: Fix and reactivate flaky tests.
api: firestore priority: p1 type: process
- FirestoreTests.GoogleCloudSamples.FirestoreTests.ListenMultipleTest - FirestoreTests.GoogleCloudSamples.FirestoreTests.ListenDocumentTest Build result [here](https://source.cloud.google.com/results/invocations/2d685bfb-f67c-48aa-9bb2-927b6f120d6f/targets/github%2Fdotnet-docs-samples%2Ffirestore%2Fapi%2FFirestoreTest%2FTestResults/tests;group=GoogleCloudSamples.FirestoreTests;test=GoogleCloudSamples.FirestoreTests.ListenDocumentTest;row=2)
1.0
[Firestore]: Fix and reactivate flaky tests. - - FirestoreTests.GoogleCloudSamples.FirestoreTests.ListenMultipleTest - FirestoreTests.GoogleCloudSamples.FirestoreTests.ListenDocumentTest Build result [here](https://source.cloud.google.com/results/invocations/2d685bfb-f67c-48aa-9bb2-927b6f120d6f/targets/github%2Fdotnet-docs-samples%2Ffirestore%2Fapi%2FFirestoreTest%2FTestResults/tests;group=GoogleCloudSamples.FirestoreTests;test=GoogleCloudSamples.FirestoreTests.ListenDocumentTest;row=2)
process
fix and reactivate flaky tests firestoretests googlecloudsamples firestoretests listenmultipletest firestoretests googlecloudsamples firestoretests listendocumenttest build result
1
3,723
6,732,907,156
IssuesEvent
2017-10-18 13:15:33
lockedata/rcms
https://api.github.com/repos/lockedata/rcms
opened
Manage registration
attendee osem processes
## Detailed task - Edit your information - Get a refund on your ticker ## Assessing the task Try to perform the task. Use google and the system documentation to help - part of what we're trying to assess how easy it is for people to work out how to do tasks. Use a 👍 (`:+1:`) reaction to this task if you were able to perform the task. Use a 👎 (`:-1:`) reaction to the task if you could not complete it. Add a reply with any comments or feedback. ## Extra Info - Site: [osem](https://intense-shore-93790.herokuapp.com/) - System documentation: [osem docs](http://osem.io/) - Role: Attendee - Area: Processes
1.0
Manage registration - ## Detailed task - Edit your information - Get a refund on your ticker ## Assessing the task Try to perform the task. Use google and the system documentation to help - part of what we're trying to assess how easy it is for people to work out how to do tasks. Use a 👍 (`:+1:`) reaction to this task if you were able to perform the task. Use a 👎 (`:-1:`) reaction to the task if you could not complete it. Add a reply with any comments or feedback. ## Extra Info - Site: [osem](https://intense-shore-93790.herokuapp.com/) - System documentation: [osem docs](http://osem.io/) - Role: Attendee - Area: Processes
process
manage registration detailed task edit your information get a refund on your ticker assessing the task try to perform the task use google and the system documentation to help part of what we re trying to assess how easy it is for people to work out how to do tasks use a 👍 reaction to this task if you were able to perform the task use a 👎 reaction to the task if you could not complete it add a reply with any comments or feedback extra info site system documentation role attendee area processes
1
20,075
26,570,078,340
IssuesEvent
2023-01-21 02:52:05
bitfocus/companion-module-requests
https://api.github.com/repos/bitfocus/companion-module-requests
closed
Server Window Won't Launch/Open
NOT YET PROCESSED
I have updated to the newest stable version of Companion but when I click on it the Server Window will not open. TBH it never has and am not sure why
1.0
Server Window Won't Launch/Open - I have updated to the newest stable version of Companion but when I click on it the Server Window will not open. TBH it never has and am not sure why
process
server window won t launch open i have updated to the newest stable version of companion but when i click on it the server window will not open tbh it never has and am not sure why
1
1,002
2,594,430,658
IssuesEvent
2015-02-20 03:17:52
BALL-Project/ball
https://api.github.com/repos/BALL-Project/ball
opened
Berechnung der Bindungsordnung nicht korrekt 2
C: VIEW P: minor T: defect
**Reported by mkonietzko on 7 Jan 42142795 17:23 UTC** Bei Berechnung der Bindungsordnung wird zunchst ein Molekl vorgeschlagen. Klickt man nun auf "Nchste Lsung berechnen", so werden zwar richtige Molekle vorgeschlagen und auf das aktuelle Molekl angewendet, aber die Wasserstoffatome werden nicht korrigiert. Auch wenn man "als neu hinzufgen" anklickt, wird ein falsche Molekl erzeugt. Beispiel Ethan. Nchste Lsung Ethen, allerdings mit 6 Wasserstoffatomen. Weiterhin kann das neu erzeugte Molekl nicht getrennt vom ursprnglichen verschoben und betrachtet werden. Beide Molekle liegen deckungsgleich bereinander.
1.0
Berechnung der Bindungsordnung nicht korrekt 2 - **Reported by mkonietzko on 7 Jan 42142795 17:23 UTC** Bei Berechnung der Bindungsordnung wird zunchst ein Molekl vorgeschlagen. Klickt man nun auf "Nchste Lsung berechnen", so werden zwar richtige Molekle vorgeschlagen und auf das aktuelle Molekl angewendet, aber die Wasserstoffatome werden nicht korrigiert. Auch wenn man "als neu hinzufgen" anklickt, wird ein falsche Molekl erzeugt. Beispiel Ethan. Nchste Lsung Ethen, allerdings mit 6 Wasserstoffatomen. Weiterhin kann das neu erzeugte Molekl nicht getrennt vom ursprnglichen verschoben und betrachtet werden. Beide Molekle liegen deckungsgleich bereinander.
non_process
berechnung der bindungsordnung nicht korrekt reported by mkonietzko on jan utc bei berechnung der bindungsordnung wird zunchst ein molekl vorgeschlagen klickt man nun auf nchste lsung berechnen so werden zwar richtige molekle vorgeschlagen und auf das aktuelle molekl angewendet aber die wasserstoffatome werden nicht korrigiert auch wenn man als neu hinzufgen anklickt wird ein falsche molekl erzeugt beispiel ethan nchste lsung ethen allerdings mit wasserstoffatomen weiterhin kann das neu erzeugte molekl nicht getrennt vom ursprnglichen verschoben und betrachtet werden beide molekle liegen deckungsgleich bereinander
0
710,405
24,416,906,078
IssuesEvent
2022-10-05 16:41:13
ufs-community/regional_workflow
https://api.github.com/repos/ufs-community/regional_workflow
closed
Add the RRFS DA and cycling components to the develop branch of regional_workflow
enhancement medium priority
## Description Add @hu5970's RRFS-based DA and cycling additions to the develop branch of regional_workflow. ## Solution Once the authoritative RRFS branch has been created in regional_workflow, Incrementally introduce PRs into the develop branch to implement both DA and cycling functionality. Individual component PRs will be necessary to manage the transition of functionality in a manageable fashion.
1.0
Add the RRFS DA and cycling components to the develop branch of regional_workflow - ## Description Add @hu5970's RRFS-based DA and cycling additions to the develop branch of regional_workflow. ## Solution Once the authoritative RRFS branch has been created in regional_workflow, Incrementally introduce PRs into the develop branch to implement both DA and cycling functionality. Individual component PRs will be necessary to manage the transition of functionality in a manageable fashion.
non_process
add the rrfs da and cycling components to the develop branch of regional workflow description add s rrfs based da and cycling additions to the develop branch of regional workflow solution once the authoritative rrfs branch has been created in regional workflow incrementally introduce prs into the develop branch to implement both da and cycling functionality individual component prs will be necessary to manage the transition of functionality in a manageable fashion
0
78,873
9,807,612,290
IssuesEvent
2019-06-12 14:03:24
djgroen/FabSim3
https://api.github.com/repos/djgroen/FabSim3
closed
FabSim / VECMAtk logo
design-only
Guys... I think it is time we think of designing one. Suggestions are welcome for logos of either type (I think we should have both ;)).
1.0
FabSim / VECMAtk logo - Guys... I think it is time we think of designing one. Suggestions are welcome for logos of either type (I think we should have both ;)).
non_process
fabsim vecmatk logo guys i think it is time we think of designing one suggestions are welcome for logos of either type i think we should have both
0
9,380
12,378,912,211
IssuesEvent
2020-05-19 11:33:39
prisma/prisma-client-js
https://api.github.com/repos/prisma/prisma-client-js
opened
Datasource override from client constructor doesn't match the datasource block from the schema.prisma file
kind/improvement process/candidate topic: dx
## Problem prisma.schema ```prisma datasource myawesomedb { provider = "mysql" url = "mysql://......" } ``` The shape of the input in the constructor is not the same as the datasource block in the schema: ```ts const client = new PrismaClient({ datasources: { myawesomedb: "mysql://........", }, }) ``` ## Suggested solution ```ts const client = new PrismaClient({ datasources: { myawesomedb: { provider: "mysql", // optional url: "mysql://........", } }, }) ``` Todo - [ ] Modify constructor - [ ] Update tests - [ ] Update docs
1.0
Datasource override from client constructor doesn't match the datasource block from the schema.prisma file - ## Problem prisma.schema ```prisma datasource myawesomedb { provider = "mysql" url = "mysql://......" } ``` The shape of the input in the constructor is not the same as the datasource block in the schema: ```ts const client = new PrismaClient({ datasources: { myawesomedb: "mysql://........", }, }) ``` ## Suggested solution ```ts const client = new PrismaClient({ datasources: { myawesomedb: { provider: "mysql", // optional url: "mysql://........", } }, }) ``` Todo - [ ] Modify constructor - [ ] Update tests - [ ] Update docs
process
datasource override from client constructor doesn t match the datasource block from the schema prisma file problem prisma schema prisma datasource myawesomedb provider mysql url mysql the shape of the input in the constructor is not the same as the datasource block in the schema ts const client new prismaclient datasources myawesomedb mysql suggested solution ts const client new prismaclient datasources myawesomedb provider mysql optional url mysql todo modify constructor update tests update docs
1
25,324
2,679,331,255
IssuesEvent
2015-03-26 16:09:58
BladeRunnerJS/brjs
https://api.github.com/repos/BladeRunnerJS/brjs
closed
McAffee virus scanner makes the app take 20+ mins to load
bug CaplinSupport high-priority
In the process of upgrading to CT4 / Motif 2.1.1. When running ./brjs serve and navigating to localhost:7070/dashboard it never loads. After around 25mins the browser will redirect to localhost:7070/dashboard/en but the dashboard still does not load. Navigating directly to /dashboard/en also has the some result. CPU profile screenshot from VisualVM is attached below. Task manager shows that the java process is using around 25% CPU for the whole time ./brjs serve is running and is the only process using a significant amount of CPU. Running ./brjs serve --debug shows that brjs looks to be constantly trying to resolve dependencies. BRJS version: 0.14.3.0 CT version: 4.0.3 Java version: 1.8.0_40 64bit JAVA_OPTS: -Xms1024m -Xmx4096m OS: Windows 7 Enterprise 64bit ![cpu profile](https://cloud.githubusercontent.com/assets/2067076/6691540/1f3ef6d4-ccbf-11e4-88b5-b5cce78fb805.png) I have also tried downloading the latest version of BRJS from the website (0.15.1.0) and have the same issue though it has a different CPU profile: ![cpu profile 15](https://cloud.githubusercontent.com/assets/2067076/6692076/5dea7874-ccc2-11e4-98be-b28f8ef25487.png)
1.0
McAffee virus scanner makes the app take 20+ mins to load - In the process of upgrading to CT4 / Motif 2.1.1. When running ./brjs serve and navigating to localhost:7070/dashboard it never loads. After around 25mins the browser will redirect to localhost:7070/dashboard/en but the dashboard still does not load. Navigating directly to /dashboard/en also has the some result. CPU profile screenshot from VisualVM is attached below. Task manager shows that the java process is using around 25% CPU for the whole time ./brjs serve is running and is the only process using a significant amount of CPU. Running ./brjs serve --debug shows that brjs looks to be constantly trying to resolve dependencies. BRJS version: 0.14.3.0 CT version: 4.0.3 Java version: 1.8.0_40 64bit JAVA_OPTS: -Xms1024m -Xmx4096m OS: Windows 7 Enterprise 64bit ![cpu profile](https://cloud.githubusercontent.com/assets/2067076/6691540/1f3ef6d4-ccbf-11e4-88b5-b5cce78fb805.png) I have also tried downloading the latest version of BRJS from the website (0.15.1.0) and have the same issue though it has a different CPU profile: ![cpu profile 15](https://cloud.githubusercontent.com/assets/2067076/6692076/5dea7874-ccc2-11e4-98be-b28f8ef25487.png)
non_process
mcaffee virus scanner makes the app take mins to load in the process of upgrading to motif when running brjs serve and navigating to localhost dashboard it never loads after around the browser will redirect to localhost dashboard en but the dashboard still does not load navigating directly to dashboard en also has the some result cpu profile screenshot from visualvm is attached below task manager shows that the java process is using around cpu for the whole time brjs serve is running and is the only process using a significant amount of cpu running brjs serve debug shows that brjs looks to be constantly trying to resolve dependencies brjs version ct version java version java opts os windows enterprise i have also tried downloading the latest version of brjs from the website and have the same issue though it has a different cpu profile
0
21,635
30,052,114,419
IssuesEvent
2023-06-28 02:00:10
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Wed, 28 Jun 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames - **Authors:** Yunfan Lu, Guoqiang Liang, Lin Wang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.15507 - **Pdf link:** https://arxiv.org/pdf/2306.15507 - **Abstract** This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames, guided by the novel event camera data. Although events possess high temporal resolution, beneficial for video frame interpolation (VFI), a hurdle in tackling this task is the lack of paired GS frames. Another challenge is that RS frames are susceptible to distortion when capturing moving objects. To this end, we propose a novel self-supervised framework that leverages events to guide RS frame correction and VFI in a unified framework. Our key idea is to estimate the displacement field (DF) non-linear dense 3D spatiotemporal information of all pixels during the exposure time, allowing for the reciprocal reconstruction between RS and GS frames as well as arbitrary frame rate VFI. Specifically, the displacement field estimation (DFE) module is proposed to estimate the spatiotemporal motion from events to correct the RS distortion and interpolate the GS frames in one step. We then combine the input RS frames and DF to learn a mapping for RS-to-GS frame interpolation. However, as the mapping is highly under-constrained, we couple it with an inverse mapping (i.e., GS-to-RS) and RS frame warping (i.e., RS-to-RS) for self-supervision. As there is a lack of labeled datasets for evaluation, we generate two synthetic datasets and collect a real-world dataset to train and test our method. Experimental results show that our method yields comparable or better performance with prior supervised methods. ### Physion++: Evaluating Physical Scene Understanding that Requires Online Inference of Different Physical Properties - **Authors:** Hsiao-Yu Tung, Mingyu Ding, Zhenfang Chen, Daniel Bear, Chuang Gan, Joshua B. Tenenbaum, Daniel LK Yamins, Judith E Fan, Kevin A. Smith - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Graphics (cs.GR); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.15668 - **Pdf link:** https://arxiv.org/pdf/2306.15668 - **Abstract** General physical scene understanding requires more than simply localizing and recognizing objects -- it requires knowledge that objects can have different latent properties (e.g., mass or elasticity), and that those properties affect the outcome of physical events. While there has been great progress in physical and video prediction models in recent years, benchmarks to test their performance typically do not require an understanding that objects have individual physical properties, or at best test only those properties that are directly observable (e.g., size or color). This work proposes a novel dataset and benchmark, termed Physion++, that rigorously evaluates visual physical prediction in artificial systems under circumstances where those predictions rely on accurate estimates of the latent physical properties of objects in the scene. Specifically, we test scenarios where accurate prediction relies on estimates of properties such as mass, friction, elasticity, and deformability, and where the values of those properties can only be inferred by observing how objects move and interact with other objects or fluids. We evaluate the performance of a number of state-of-the-art prediction models that span a variety of levels of learning vs. built-in knowledge, and compare that performance to a set of human predictions. We find that models that have been trained using standard regimes and datasets do not spontaneously learn to make inferences about latent properties, but also that models that encode objectness and physical states tend to make better predictions. However, there is still a huge gap between all models and human performance, and all models' predictions correlate poorly with those made by humans, suggesting that no state-of-the-art model is learning to make physical predictions in a human-like way. Project page: https://dingmyu.github.io/physion_v2/ ## Keyword: event camera ### Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames - **Authors:** Yunfan Lu, Guoqiang Liang, Lin Wang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.15507 - **Pdf link:** https://arxiv.org/pdf/2306.15507 - **Abstract** This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames, guided by the novel event camera data. Although events possess high temporal resolution, beneficial for video frame interpolation (VFI), a hurdle in tackling this task is the lack of paired GS frames. Another challenge is that RS frames are susceptible to distortion when capturing moving objects. To this end, we propose a novel self-supervised framework that leverages events to guide RS frame correction and VFI in a unified framework. Our key idea is to estimate the displacement field (DF) non-linear dense 3D spatiotemporal information of all pixels during the exposure time, allowing for the reciprocal reconstruction between RS and GS frames as well as arbitrary frame rate VFI. Specifically, the displacement field estimation (DFE) module is proposed to estimate the spatiotemporal motion from events to correct the RS distortion and interpolate the GS frames in one step. We then combine the input RS frames and DF to learn a mapping for RS-to-GS frame interpolation. However, as the mapping is highly under-constrained, we couple it with an inverse mapping (i.e., GS-to-RS) and RS frame warping (i.e., RS-to-RS) for self-supervision. As there is a lack of labeled datasets for evaluation, we generate two synthetic datasets and collect a real-world dataset to train and test our method. Experimental results show that our method yields comparable or better performance with prior supervised methods. ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Spectral Analysis of Marine Debris in Simulated and Observed Sentinel-2/MSI Images using Unsupervised Classification - **Authors:** Bianca Matos de Barros, Douglas Galimberti Barbosa, Cristiano Lima Hackmann - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2306.15008 - **Pdf link:** https://arxiv.org/pdf/2306.15008 - **Abstract** Marine litter poses significant threats to marine and coastal environments, with its impacts ever-growing. Remote sensing provides an advantageous supplement to traditional mitigation techniques, such as local cleaning operations and trawl net surveys, due to its capabilities for extensive coverage and frequent observation. In this study, we used Radiative Transfer Model (RTM) simulated data and data from the Multispectral Instrument (MSI) of the Sentinel-2 mission in combination with machine learning algorithms. Our aim was to study the spectral behavior of marine plastic pollution and evaluate the applicability of RTMs within this research area. The results from the exploratory analysis and unsupervised classification using the KMeans algorithm indicate that the spectral behavior of pollutants is influenced by factors such as the type of polymer and pixel coverage percentage. The findings also reveal spectral characteristics and trends of association and differentiation among elements. The applied methodology is strongly dependent on the data, and if reapplied in new, more diverse, and detailed datasets, it can potentially generate even better results. These insights can guide future research in remote sensing applications for detecting marine plastic pollution. ### Delving into Crispness: Guided Label Refinement for Crisp Edge Detection - **Authors:** Yunfan Ye, Renjiao Yi, Zhirui Gao, Zhiping Cai, Kai Xu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.15172 - **Pdf link:** https://arxiv.org/pdf/2306.15172 - **Abstract** Learning-based edge detection usually suffers from predicting thick edges. Through extensive quantitative study with a new edge crispness measure, we find that noisy human-labeled edges are the main cause of thick predictions. Based on this observation, we advocate that more attention should be paid on label quality than on model design to achieve crisp edge detection. To this end, we propose an effective Canny-guided refinement of human-labeled edges whose result can be used to train crisp edge detectors. Essentially, it seeks for a subset of over-detected Canny edges that best align human labels. We show that several existing edge detectors can be turned into a crisp edge detector through training on our refined edge maps. Experiments demonstrate that deep models trained with refined edges achieve significant performance boost of crispness from 17.4% to 30.6%. With the PiDiNet backbone, our method improves ODS and OIS by 12.2% and 12.6% on the Multicue dataset, respectively, without relying on non-maximal suppression. We further conduct experiments and show the superiority of our crisp edge detection for optical flow estimation and image segmentation. ### Irregular Change Detection in Sparse Bi-Temporal Point Clouds using Learned Place Recognition Descriptors and Point-to-Voxel Comparison - **Authors:** Nikolaos Stathoulopoulos, Anton Koval, George Nikolakopoulos - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.15416 - **Pdf link:** https://arxiv.org/pdf/2306.15416 - **Abstract** Change detection and irregular object extraction in 3D point clouds is a challenging task that is of high importance not only for autonomous navigation but also for updating existing digital twin models of various industrial environments. This article proposes an innovative approach for change detection in 3D point clouds using deep learned place recognition descriptors and irregular object extraction based on voxel-to-point comparison. The proposed method first aligns the bi-temporal point clouds using a map-merging algorithm in order to establish a common coordinate frame. Then, it utilizes deep learning techniques to extract robust and discriminative features from the 3D point cloud scans, which are used to detect changes between consecutive point cloud frames and therefore find the changed areas. Finally, the altered areas are sampled and compared between the two time instances to extract any obstructions that caused the area to change. The proposed method was successfully evaluated in real-world field experiments, where it was able to detect different types of changes in 3D point clouds, such as object or muck-pile addition and displacement, showcasing the effectiveness of the approach. The results of this study demonstrate important implications for various applications, including safety and security monitoring in construction sites, mapping and exploration and suggests potential future research directions in this field. ### Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames - **Authors:** Yunfan Lu, Guoqiang Liang, Lin Wang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.15507 - **Pdf link:** https://arxiv.org/pdf/2306.15507 - **Abstract** This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames, guided by the novel event camera data. Although events possess high temporal resolution, beneficial for video frame interpolation (VFI), a hurdle in tackling this task is the lack of paired GS frames. Another challenge is that RS frames are susceptible to distortion when capturing moving objects. To this end, we propose a novel self-supervised framework that leverages events to guide RS frame correction and VFI in a unified framework. Our key idea is to estimate the displacement field (DF) non-linear dense 3D spatiotemporal information of all pixels during the exposure time, allowing for the reciprocal reconstruction between RS and GS frames as well as arbitrary frame rate VFI. Specifically, the displacement field estimation (DFE) module is proposed to estimate the spatiotemporal motion from events to correct the RS distortion and interpolate the GS frames in one step. We then combine the input RS frames and DF to learn a mapping for RS-to-GS frame interpolation. However, as the mapping is highly under-constrained, we couple it with an inverse mapping (i.e., GS-to-RS) and RS frame warping (i.e., RS-to-RS) for self-supervision. As there is a lack of labeled datasets for evaluation, we generate two synthetic datasets and collect a real-world dataset to train and test our method. Experimental results show that our method yields comparable or better performance with prior supervised methods. ### Rethinking Cross-Entropy Loss for Stereo Matching Networks - **Authors:** Peng Xu, Zhiyu Xiang, Chenyu Qiao, Jingyun Fu, Xijun Zhao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.15612 - **Pdf link:** https://arxiv.org/pdf/2306.15612 - **Abstract** Despite the great success of deep learning in stereo matching, recovering accurate and clearly-contoured disparity map is still challenging. Currently, L1 loss and cross-entropy loss are the two most widely used loss functions for training the stereo matching networks. Comparing with the former, the latter can usually achieve better results thanks to its direct constraint to the the cost volume. However, how to generate reasonable ground-truth distribution for this loss function remains largely under exploited. Existing works assume uni-modal distributions around the ground-truth for all of the pixels, which ignores the fact that the edge pixels may have multi-modal distributions. In this paper, we first experimentally exhibit the importance of correct edge supervision to the overall disparity accuracy. Then a novel adaptive multi-modal cross-entropy loss which encourages the network to generate different distribution patterns for edge and non-edge pixels is proposed. We further optimize the disparity estimator in the inference stage to alleviate the bleeding and misalignment artifacts at the edge. Our method is generic and can help classic stereo matching models regain competitive performance. GANet trained by our loss ranks 1st on the KITTI 2015 and 2012 benchmarks and outperforms state-of-the-art methods by a large margin. Meanwhile, our method also exhibits superior cross-domain generalization ability and outperforms existing generalization-specialized methods on four popular real-world datasets. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### You Can Mask More For Extremely Low-Bitrate Image Compression - **Authors:** Anqi Li, Feng Li, Jiaxin Han, Huihui Bai, Runmin Cong, Chunjie Zhang, Meng Wang, Weisi Lin, Yao Zhao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2306.15561 - **Pdf link:** https://arxiv.org/pdf/2306.15561 - **Abstract** Learned image compression (LIC) methods have experienced significant progress during recent years. However, these methods are primarily dedicated to optimizing the rate-distortion (R-D) performance at medium and high bitrates (> 0.1 bits per pixel (bpp)), while research on extremely low bitrates is limited. Besides, existing methods fail to explicitly explore the image structure and texture components crucial for image compression, treating them equally alongside uninformative components in networks. This can cause severe perceptual quality degradation, especially under low-bitrate scenarios. In this work, inspired by the success of pre-trained masked autoencoders (MAE) in many downstream tasks, we propose to rethink its mask sampling strategy from structure and texture perspectives for high redundancy reduction and discriminative feature representation, further unleashing the potential of LIC methods. Therefore, we present a dual-adaptive masking approach (DA-Mask) that samples visible patches based on the structure and texture distributions of original images. We combine DA-Mask and pre-trained MAE in masked image modeling (MIM) as an initial compressor that abstracts informative semantic context and texture representations. Such a pipeline can well cooperate with LIC networks to achieve further secondary compression while preserving promising reconstruction quality. Consequently, we propose a simple yet effective masked compression model (MCM), the first framework that unifies MIM and LIC end-to-end for extremely low-bitrate image compression. Extensive experiments have demonstrated that our approach outperforms recent state-of-the-art methods in R-D performance, visual quality, and downstream applications, at very low bitrates. Our code is available at https://github.com/lianqi1008/MCM.git. ## Keyword: RAW ### Spectral Analysis of Marine Debris in Simulated and Observed Sentinel-2/MSI Images using Unsupervised Classification - **Authors:** Bianca Matos de Barros, Douglas Galimberti Barbosa, Cristiano Lima Hackmann - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2306.15008 - **Pdf link:** https://arxiv.org/pdf/2306.15008 - **Abstract** Marine litter poses significant threats to marine and coastal environments, with its impacts ever-growing. Remote sensing provides an advantageous supplement to traditional mitigation techniques, such as local cleaning operations and trawl net surveys, due to its capabilities for extensive coverage and frequent observation. In this study, we used Radiative Transfer Model (RTM) simulated data and data from the Multispectral Instrument (MSI) of the Sentinel-2 mission in combination with machine learning algorithms. Our aim was to study the spectral behavior of marine plastic pollution and evaluate the applicability of RTMs within this research area. The results from the exploratory analysis and unsupervised classification using the KMeans algorithm indicate that the spectral behavior of pollutants is influenced by factors such as the type of polymer and pixel coverage percentage. The findings also reveal spectral characteristics and trends of association and differentiation among elements. The applied methodology is strongly dependent on the data, and if reapplied in new, more diverse, and detailed datasets, it can potentially generate even better results. These insights can guide future research in remote sensing applications for detecting marine plastic pollution. ### PANet: LiDAR Panoptic Segmentation with Sparse Instance Proposal and Aggregation - **Authors:** Jianbiao Mei, Yu Yang, Mengmeng Wang, Xiaojun Hou, Laijian Li, Yong Liu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2306.15348 - **Pdf link:** https://arxiv.org/pdf/2306.15348 - **Abstract** Reliable LiDAR panoptic segmentation (LPS), including both semantic and instance segmentation, is vital for many robotic applications, such as autonomous driving. This work proposes a new LPS framework named PANet to eliminate the dependency on the offset branch and improve the performance on large objects, which are always over-segmented by clustering algorithms. Firstly, we propose a non-learning Sparse Instance Proposal (SIP) module with the ``sampling-shifting-grouping" scheme to directly group thing points into instances from the raw point cloud efficiently. More specifically, balanced point sampling is introduced to generate sparse seed points with more uniform point distribution over the distance range. And a shift module, termed bubble shifting, is proposed to shrink the seed points to the clustered centers. Then we utilize the connected component label algorithm to generate instance proposals. Furthermore, an instance aggregation module is devised to integrate potentially fragmented instances, improving the performance of the SIP module on large objects. Extensive experiments show that PANet achieves state-of-the-art performance among published works on the SemanticKITII validation and nuScenes validation for the panoptic segmentation task. ### Detector-Free Structure from Motion - **Authors:** Xingyi He, Jiaming Sun, Yifan Wang, Sida Peng, Qixing Huang, Hujun Bao, Xiaowei Zhou - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.15669 - **Pdf link:** https://arxiv.org/pdf/2306.15669 - **Abstract** We propose a new structure-from-motion framework to recover accurate camera poses and point clouds from unordered images. Traditional SfM systems typically rely on the successful detection of repeatable keypoints across multiple views as the first step, which is difficult for texture-poor scenes, and poor keypoint detection may break down the whole SfM system. We propose a new detector-free SfM framework to draw benefits from the recent success of detector-free matchers to avoid the early determination of keypoints, while solving the multi-view inconsistency issue of detector-free matchers. Specifically, our framework first reconstructs a coarse SfM model from quantized detector-free matches. Then, it refines the model by a novel iterative refinement pipeline, which iterates between an attention-based multi-view matching module to refine feature tracks and a geometry refinement module to improve the reconstruction accuracy. Experiments demonstrate that the proposed framework outperforms existing detector-based SfM systems on common benchmark datasets. We also collect a texture-poor SfM dataset to demonstrate the capability of our framework to reconstruct texture-poor scenes. Based on this framework, we take $\textit{first place}$ in Image Matching Challenge 2023. ## Keyword: raw image There is no result
2.0
New submissions for Wed, 28 Jun 23 - ## Keyword: events ### Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames - **Authors:** Yunfan Lu, Guoqiang Liang, Lin Wang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.15507 - **Pdf link:** https://arxiv.org/pdf/2306.15507 - **Abstract** This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames, guided by the novel event camera data. Although events possess high temporal resolution, beneficial for video frame interpolation (VFI), a hurdle in tackling this task is the lack of paired GS frames. Another challenge is that RS frames are susceptible to distortion when capturing moving objects. To this end, we propose a novel self-supervised framework that leverages events to guide RS frame correction and VFI in a unified framework. Our key idea is to estimate the displacement field (DF) non-linear dense 3D spatiotemporal information of all pixels during the exposure time, allowing for the reciprocal reconstruction between RS and GS frames as well as arbitrary frame rate VFI. Specifically, the displacement field estimation (DFE) module is proposed to estimate the spatiotemporal motion from events to correct the RS distortion and interpolate the GS frames in one step. We then combine the input RS frames and DF to learn a mapping for RS-to-GS frame interpolation. However, as the mapping is highly under-constrained, we couple it with an inverse mapping (i.e., GS-to-RS) and RS frame warping (i.e., RS-to-RS) for self-supervision. As there is a lack of labeled datasets for evaluation, we generate two synthetic datasets and collect a real-world dataset to train and test our method. Experimental results show that our method yields comparable or better performance with prior supervised methods. ### Physion++: Evaluating Physical Scene Understanding that Requires Online Inference of Different Physical Properties - **Authors:** Hsiao-Yu Tung, Mingyu Ding, Zhenfang Chen, Daniel Bear, Chuang Gan, Joshua B. Tenenbaum, Daniel LK Yamins, Judith E Fan, Kevin A. Smith - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Graphics (cs.GR); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.15668 - **Pdf link:** https://arxiv.org/pdf/2306.15668 - **Abstract** General physical scene understanding requires more than simply localizing and recognizing objects -- it requires knowledge that objects can have different latent properties (e.g., mass or elasticity), and that those properties affect the outcome of physical events. While there has been great progress in physical and video prediction models in recent years, benchmarks to test their performance typically do not require an understanding that objects have individual physical properties, or at best test only those properties that are directly observable (e.g., size or color). This work proposes a novel dataset and benchmark, termed Physion++, that rigorously evaluates visual physical prediction in artificial systems under circumstances where those predictions rely on accurate estimates of the latent physical properties of objects in the scene. Specifically, we test scenarios where accurate prediction relies on estimates of properties such as mass, friction, elasticity, and deformability, and where the values of those properties can only be inferred by observing how objects move and interact with other objects or fluids. We evaluate the performance of a number of state-of-the-art prediction models that span a variety of levels of learning vs. built-in knowledge, and compare that performance to a set of human predictions. We find that models that have been trained using standard regimes and datasets do not spontaneously learn to make inferences about latent properties, but also that models that encode objectness and physical states tend to make better predictions. However, there is still a huge gap between all models and human performance, and all models' predictions correlate poorly with those made by humans, suggesting that no state-of-the-art model is learning to make physical predictions in a human-like way. Project page: https://dingmyu.github.io/physion_v2/ ## Keyword: event camera ### Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames - **Authors:** Yunfan Lu, Guoqiang Liang, Lin Wang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.15507 - **Pdf link:** https://arxiv.org/pdf/2306.15507 - **Abstract** This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames, guided by the novel event camera data. Although events possess high temporal resolution, beneficial for video frame interpolation (VFI), a hurdle in tackling this task is the lack of paired GS frames. Another challenge is that RS frames are susceptible to distortion when capturing moving objects. To this end, we propose a novel self-supervised framework that leverages events to guide RS frame correction and VFI in a unified framework. Our key idea is to estimate the displacement field (DF) non-linear dense 3D spatiotemporal information of all pixels during the exposure time, allowing for the reciprocal reconstruction between RS and GS frames as well as arbitrary frame rate VFI. Specifically, the displacement field estimation (DFE) module is proposed to estimate the spatiotemporal motion from events to correct the RS distortion and interpolate the GS frames in one step. We then combine the input RS frames and DF to learn a mapping for RS-to-GS frame interpolation. However, as the mapping is highly under-constrained, we couple it with an inverse mapping (i.e., GS-to-RS) and RS frame warping (i.e., RS-to-RS) for self-supervision. As there is a lack of labeled datasets for evaluation, we generate two synthetic datasets and collect a real-world dataset to train and test our method. Experimental results show that our method yields comparable or better performance with prior supervised methods. ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Spectral Analysis of Marine Debris in Simulated and Observed Sentinel-2/MSI Images using Unsupervised Classification - **Authors:** Bianca Matos de Barros, Douglas Galimberti Barbosa, Cristiano Lima Hackmann - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2306.15008 - **Pdf link:** https://arxiv.org/pdf/2306.15008 - **Abstract** Marine litter poses significant threats to marine and coastal environments, with its impacts ever-growing. Remote sensing provides an advantageous supplement to traditional mitigation techniques, such as local cleaning operations and trawl net surveys, due to its capabilities for extensive coverage and frequent observation. In this study, we used Radiative Transfer Model (RTM) simulated data and data from the Multispectral Instrument (MSI) of the Sentinel-2 mission in combination with machine learning algorithms. Our aim was to study the spectral behavior of marine plastic pollution and evaluate the applicability of RTMs within this research area. The results from the exploratory analysis and unsupervised classification using the KMeans algorithm indicate that the spectral behavior of pollutants is influenced by factors such as the type of polymer and pixel coverage percentage. The findings also reveal spectral characteristics and trends of association and differentiation among elements. The applied methodology is strongly dependent on the data, and if reapplied in new, more diverse, and detailed datasets, it can potentially generate even better results. These insights can guide future research in remote sensing applications for detecting marine plastic pollution. ### Delving into Crispness: Guided Label Refinement for Crisp Edge Detection - **Authors:** Yunfan Ye, Renjiao Yi, Zhirui Gao, Zhiping Cai, Kai Xu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.15172 - **Pdf link:** https://arxiv.org/pdf/2306.15172 - **Abstract** Learning-based edge detection usually suffers from predicting thick edges. Through extensive quantitative study with a new edge crispness measure, we find that noisy human-labeled edges are the main cause of thick predictions. Based on this observation, we advocate that more attention should be paid on label quality than on model design to achieve crisp edge detection. To this end, we propose an effective Canny-guided refinement of human-labeled edges whose result can be used to train crisp edge detectors. Essentially, it seeks for a subset of over-detected Canny edges that best align human labels. We show that several existing edge detectors can be turned into a crisp edge detector through training on our refined edge maps. Experiments demonstrate that deep models trained with refined edges achieve significant performance boost of crispness from 17.4% to 30.6%. With the PiDiNet backbone, our method improves ODS and OIS by 12.2% and 12.6% on the Multicue dataset, respectively, without relying on non-maximal suppression. We further conduct experiments and show the superiority of our crisp edge detection for optical flow estimation and image segmentation. ### Irregular Change Detection in Sparse Bi-Temporal Point Clouds using Learned Place Recognition Descriptors and Point-to-Voxel Comparison - **Authors:** Nikolaos Stathoulopoulos, Anton Koval, George Nikolakopoulos - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.15416 - **Pdf link:** https://arxiv.org/pdf/2306.15416 - **Abstract** Change detection and irregular object extraction in 3D point clouds is a challenging task that is of high importance not only for autonomous navigation but also for updating existing digital twin models of various industrial environments. This article proposes an innovative approach for change detection in 3D point clouds using deep learned place recognition descriptors and irregular object extraction based on voxel-to-point comparison. The proposed method first aligns the bi-temporal point clouds using a map-merging algorithm in order to establish a common coordinate frame. Then, it utilizes deep learning techniques to extract robust and discriminative features from the 3D point cloud scans, which are used to detect changes between consecutive point cloud frames and therefore find the changed areas. Finally, the altered areas are sampled and compared between the two time instances to extract any obstructions that caused the area to change. The proposed method was successfully evaluated in real-world field experiments, where it was able to detect different types of changes in 3D point clouds, such as object or muck-pile addition and displacement, showcasing the effectiveness of the approach. The results of this study demonstrate important implications for various applications, including safety and security monitoring in construction sites, mapping and exploration and suggests potential future research directions in this field. ### Self-supervised Learning of Event-guided Video Frame Interpolation for Rolling Shutter Frames - **Authors:** Yunfan Lu, Guoqiang Liang, Lin Wang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.15507 - **Pdf link:** https://arxiv.org/pdf/2306.15507 - **Abstract** This paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter (GS) frames from two consecutive rolling shutter (RS) frames, guided by the novel event camera data. Although events possess high temporal resolution, beneficial for video frame interpolation (VFI), a hurdle in tackling this task is the lack of paired GS frames. Another challenge is that RS frames are susceptible to distortion when capturing moving objects. To this end, we propose a novel self-supervised framework that leverages events to guide RS frame correction and VFI in a unified framework. Our key idea is to estimate the displacement field (DF) non-linear dense 3D spatiotemporal information of all pixels during the exposure time, allowing for the reciprocal reconstruction between RS and GS frames as well as arbitrary frame rate VFI. Specifically, the displacement field estimation (DFE) module is proposed to estimate the spatiotemporal motion from events to correct the RS distortion and interpolate the GS frames in one step. We then combine the input RS frames and DF to learn a mapping for RS-to-GS frame interpolation. However, as the mapping is highly under-constrained, we couple it with an inverse mapping (i.e., GS-to-RS) and RS frame warping (i.e., RS-to-RS) for self-supervision. As there is a lack of labeled datasets for evaluation, we generate two synthetic datasets and collect a real-world dataset to train and test our method. Experimental results show that our method yields comparable or better performance with prior supervised methods. ### Rethinking Cross-Entropy Loss for Stereo Matching Networks - **Authors:** Peng Xu, Zhiyu Xiang, Chenyu Qiao, Jingyun Fu, Xijun Zhao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.15612 - **Pdf link:** https://arxiv.org/pdf/2306.15612 - **Abstract** Despite the great success of deep learning in stereo matching, recovering accurate and clearly-contoured disparity map is still challenging. Currently, L1 loss and cross-entropy loss are the two most widely used loss functions for training the stereo matching networks. Comparing with the former, the latter can usually achieve better results thanks to its direct constraint to the the cost volume. However, how to generate reasonable ground-truth distribution for this loss function remains largely under exploited. Existing works assume uni-modal distributions around the ground-truth for all of the pixels, which ignores the fact that the edge pixels may have multi-modal distributions. In this paper, we first experimentally exhibit the importance of correct edge supervision to the overall disparity accuracy. Then a novel adaptive multi-modal cross-entropy loss which encourages the network to generate different distribution patterns for edge and non-edge pixels is proposed. We further optimize the disparity estimator in the inference stage to alleviate the bleeding and misalignment artifacts at the edge. Our method is generic and can help classic stereo matching models regain competitive performance. GANet trained by our loss ranks 1st on the KITTI 2015 and 2012 benchmarks and outperforms state-of-the-art methods by a large margin. Meanwhile, our method also exhibits superior cross-domain generalization ability and outperforms existing generalization-specialized methods on four popular real-world datasets. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### You Can Mask More For Extremely Low-Bitrate Image Compression - **Authors:** Anqi Li, Feng Li, Jiaxin Han, Huihui Bai, Runmin Cong, Chunjie Zhang, Meng Wang, Weisi Lin, Yao Zhao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2306.15561 - **Pdf link:** https://arxiv.org/pdf/2306.15561 - **Abstract** Learned image compression (LIC) methods have experienced significant progress during recent years. However, these methods are primarily dedicated to optimizing the rate-distortion (R-D) performance at medium and high bitrates (> 0.1 bits per pixel (bpp)), while research on extremely low bitrates is limited. Besides, existing methods fail to explicitly explore the image structure and texture components crucial for image compression, treating them equally alongside uninformative components in networks. This can cause severe perceptual quality degradation, especially under low-bitrate scenarios. In this work, inspired by the success of pre-trained masked autoencoders (MAE) in many downstream tasks, we propose to rethink its mask sampling strategy from structure and texture perspectives for high redundancy reduction and discriminative feature representation, further unleashing the potential of LIC methods. Therefore, we present a dual-adaptive masking approach (DA-Mask) that samples visible patches based on the structure and texture distributions of original images. We combine DA-Mask and pre-trained MAE in masked image modeling (MIM) as an initial compressor that abstracts informative semantic context and texture representations. Such a pipeline can well cooperate with LIC networks to achieve further secondary compression while preserving promising reconstruction quality. Consequently, we propose a simple yet effective masked compression model (MCM), the first framework that unifies MIM and LIC end-to-end for extremely low-bitrate image compression. Extensive experiments have demonstrated that our approach outperforms recent state-of-the-art methods in R-D performance, visual quality, and downstream applications, at very low bitrates. Our code is available at https://github.com/lianqi1008/MCM.git. ## Keyword: RAW ### Spectral Analysis of Marine Debris in Simulated and Observed Sentinel-2/MSI Images using Unsupervised Classification - **Authors:** Bianca Matos de Barros, Douglas Galimberti Barbosa, Cristiano Lima Hackmann - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2306.15008 - **Pdf link:** https://arxiv.org/pdf/2306.15008 - **Abstract** Marine litter poses significant threats to marine and coastal environments, with its impacts ever-growing. Remote sensing provides an advantageous supplement to traditional mitigation techniques, such as local cleaning operations and trawl net surveys, due to its capabilities for extensive coverage and frequent observation. In this study, we used Radiative Transfer Model (RTM) simulated data and data from the Multispectral Instrument (MSI) of the Sentinel-2 mission in combination with machine learning algorithms. Our aim was to study the spectral behavior of marine plastic pollution and evaluate the applicability of RTMs within this research area. The results from the exploratory analysis and unsupervised classification using the KMeans algorithm indicate that the spectral behavior of pollutants is influenced by factors such as the type of polymer and pixel coverage percentage. The findings also reveal spectral characteristics and trends of association and differentiation among elements. The applied methodology is strongly dependent on the data, and if reapplied in new, more diverse, and detailed datasets, it can potentially generate even better results. These insights can guide future research in remote sensing applications for detecting marine plastic pollution. ### PANet: LiDAR Panoptic Segmentation with Sparse Instance Proposal and Aggregation - **Authors:** Jianbiao Mei, Yu Yang, Mengmeng Wang, Xiaojun Hou, Laijian Li, Yong Liu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2306.15348 - **Pdf link:** https://arxiv.org/pdf/2306.15348 - **Abstract** Reliable LiDAR panoptic segmentation (LPS), including both semantic and instance segmentation, is vital for many robotic applications, such as autonomous driving. This work proposes a new LPS framework named PANet to eliminate the dependency on the offset branch and improve the performance on large objects, which are always over-segmented by clustering algorithms. Firstly, we propose a non-learning Sparse Instance Proposal (SIP) module with the ``sampling-shifting-grouping" scheme to directly group thing points into instances from the raw point cloud efficiently. More specifically, balanced point sampling is introduced to generate sparse seed points with more uniform point distribution over the distance range. And a shift module, termed bubble shifting, is proposed to shrink the seed points to the clustered centers. Then we utilize the connected component label algorithm to generate instance proposals. Furthermore, an instance aggregation module is devised to integrate potentially fragmented instances, improving the performance of the SIP module on large objects. Extensive experiments show that PANet achieves state-of-the-art performance among published works on the SemanticKITII validation and nuScenes validation for the panoptic segmentation task. ### Detector-Free Structure from Motion - **Authors:** Xingyi He, Jiaming Sun, Yifan Wang, Sida Peng, Qixing Huang, Hujun Bao, Xiaowei Zhou - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.15669 - **Pdf link:** https://arxiv.org/pdf/2306.15669 - **Abstract** We propose a new structure-from-motion framework to recover accurate camera poses and point clouds from unordered images. Traditional SfM systems typically rely on the successful detection of repeatable keypoints across multiple views as the first step, which is difficult for texture-poor scenes, and poor keypoint detection may break down the whole SfM system. We propose a new detector-free SfM framework to draw benefits from the recent success of detector-free matchers to avoid the early determination of keypoints, while solving the multi-view inconsistency issue of detector-free matchers. Specifically, our framework first reconstructs a coarse SfM model from quantized detector-free matches. Then, it refines the model by a novel iterative refinement pipeline, which iterates between an attention-based multi-view matching module to refine feature tracks and a geometry refinement module to improve the reconstruction accuracy. Experiments demonstrate that the proposed framework outperforms existing detector-based SfM systems on common benchmark datasets. We also collect a texture-poor SfM dataset to demonstrate the capability of our framework to reconstruct texture-poor scenes. Based on this framework, we take $\textit{first place}$ in Image Matching Challenge 2023. ## Keyword: raw image There is no result
process
new submissions for wed jun keyword events self supervised learning of event guided video frame interpolation for rolling shutter frames authors yunfan lu guoqiang liang lin wang subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract this paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter gs frames from two consecutive rolling shutter rs frames guided by the novel event camera data although events possess high temporal resolution beneficial for video frame interpolation vfi a hurdle in tackling this task is the lack of paired gs frames another challenge is that rs frames are susceptible to distortion when capturing moving objects to this end we propose a novel self supervised framework that leverages events to guide rs frame correction and vfi in a unified framework our key idea is to estimate the displacement field df non linear dense spatiotemporal information of all pixels during the exposure time allowing for the reciprocal reconstruction between rs and gs frames as well as arbitrary frame rate vfi specifically the displacement field estimation dfe module is proposed to estimate the spatiotemporal motion from events to correct the rs distortion and interpolate the gs frames in one step we then combine the input rs frames and df to learn a mapping for rs to gs frame interpolation however as the mapping is highly under constrained we couple it with an inverse mapping i e gs to rs and rs frame warping i e rs to rs for self supervision as there is a lack of labeled datasets for evaluation we generate two synthetic datasets and collect a real world dataset to train and test our method experimental results show that our method yields comparable or better performance with prior supervised methods physion evaluating physical scene understanding that requires online inference of different physical properties authors hsiao yu tung mingyu ding zhenfang chen daniel bear chuang gan joshua b tenenbaum daniel lk yamins judith e fan kevin a smith subjects computer vision and pattern recognition cs cv artificial intelligence cs ai graphics cs gr robotics cs ro arxiv link pdf link abstract general physical scene understanding requires more than simply localizing and recognizing objects it requires knowledge that objects can have different latent properties e g mass or elasticity and that those properties affect the outcome of physical events while there has been great progress in physical and video prediction models in recent years benchmarks to test their performance typically do not require an understanding that objects have individual physical properties or at best test only those properties that are directly observable e g size or color this work proposes a novel dataset and benchmark termed physion that rigorously evaluates visual physical prediction in artificial systems under circumstances where those predictions rely on accurate estimates of the latent physical properties of objects in the scene specifically we test scenarios where accurate prediction relies on estimates of properties such as mass friction elasticity and deformability and where the values of those properties can only be inferred by observing how objects move and interact with other objects or fluids we evaluate the performance of a number of state of the art prediction models that span a variety of levels of learning vs built in knowledge and compare that performance to a set of human predictions we find that models that have been trained using standard regimes and datasets do not spontaneously learn to make inferences about latent properties but also that models that encode objectness and physical states tend to make better predictions however there is still a huge gap between all models and human performance and all models predictions correlate poorly with those made by humans suggesting that no state of the art model is learning to make physical predictions in a human like way project page keyword event camera self supervised learning of event guided video frame interpolation for rolling shutter frames authors yunfan lu guoqiang liang lin wang subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract this paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter gs frames from two consecutive rolling shutter rs frames guided by the novel event camera data although events possess high temporal resolution beneficial for video frame interpolation vfi a hurdle in tackling this task is the lack of paired gs frames another challenge is that rs frames are susceptible to distortion when capturing moving objects to this end we propose a novel self supervised framework that leverages events to guide rs frame correction and vfi in a unified framework our key idea is to estimate the displacement field df non linear dense spatiotemporal information of all pixels during the exposure time allowing for the reciprocal reconstruction between rs and gs frames as well as arbitrary frame rate vfi specifically the displacement field estimation dfe module is proposed to estimate the spatiotemporal motion from events to correct the rs distortion and interpolate the gs frames in one step we then combine the input rs frames and df to learn a mapping for rs to gs frame interpolation however as the mapping is highly under constrained we couple it with an inverse mapping i e gs to rs and rs frame warping i e rs to rs for self supervision as there is a lack of labeled datasets for evaluation we generate two synthetic datasets and collect a real world dataset to train and test our method experimental results show that our method yields comparable or better performance with prior supervised methods keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp spectral analysis of marine debris in simulated and observed sentinel msi images using unsupervised classification authors bianca matos de barros douglas galimberti barbosa cristiano lima hackmann subjects computer vision and pattern recognition cs cv machine learning cs lg image and video processing eess iv arxiv link pdf link abstract marine litter poses significant threats to marine and coastal environments with its impacts ever growing remote sensing provides an advantageous supplement to traditional mitigation techniques such as local cleaning operations and trawl net surveys due to its capabilities for extensive coverage and frequent observation in this study we used radiative transfer model rtm simulated data and data from the multispectral instrument msi of the sentinel mission in combination with machine learning algorithms our aim was to study the spectral behavior of marine plastic pollution and evaluate the applicability of rtms within this research area the results from the exploratory analysis and unsupervised classification using the kmeans algorithm indicate that the spectral behavior of pollutants is influenced by factors such as the type of polymer and pixel coverage percentage the findings also reveal spectral characteristics and trends of association and differentiation among elements the applied methodology is strongly dependent on the data and if reapplied in new more diverse and detailed datasets it can potentially generate even better results these insights can guide future research in remote sensing applications for detecting marine plastic pollution delving into crispness guided label refinement for crisp edge detection authors yunfan ye renjiao yi zhirui gao zhiping cai kai xu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract learning based edge detection usually suffers from predicting thick edges through extensive quantitative study with a new edge crispness measure we find that noisy human labeled edges are the main cause of thick predictions based on this observation we advocate that more attention should be paid on label quality than on model design to achieve crisp edge detection to this end we propose an effective canny guided refinement of human labeled edges whose result can be used to train crisp edge detectors essentially it seeks for a subset of over detected canny edges that best align human labels we show that several existing edge detectors can be turned into a crisp edge detector through training on our refined edge maps experiments demonstrate that deep models trained with refined edges achieve significant performance boost of crispness from to with the pidinet backbone our method improves ods and ois by and on the multicue dataset respectively without relying on non maximal suppression we further conduct experiments and show the superiority of our crisp edge detection for optical flow estimation and image segmentation irregular change detection in sparse bi temporal point clouds using learned place recognition descriptors and point to voxel comparison authors nikolaos stathoulopoulos anton koval george nikolakopoulos subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract change detection and irregular object extraction in point clouds is a challenging task that is of high importance not only for autonomous navigation but also for updating existing digital twin models of various industrial environments this article proposes an innovative approach for change detection in point clouds using deep learned place recognition descriptors and irregular object extraction based on voxel to point comparison the proposed method first aligns the bi temporal point clouds using a map merging algorithm in order to establish a common coordinate frame then it utilizes deep learning techniques to extract robust and discriminative features from the point cloud scans which are used to detect changes between consecutive point cloud frames and therefore find the changed areas finally the altered areas are sampled and compared between the two time instances to extract any obstructions that caused the area to change the proposed method was successfully evaluated in real world field experiments where it was able to detect different types of changes in point clouds such as object or muck pile addition and displacement showcasing the effectiveness of the approach the results of this study demonstrate important implications for various applications including safety and security monitoring in construction sites mapping and exploration and suggests potential future research directions in this field self supervised learning of event guided video frame interpolation for rolling shutter frames authors yunfan lu guoqiang liang lin wang subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract this paper makes the first attempt to tackle the challenging task of recovering arbitrary frame rate latent global shutter gs frames from two consecutive rolling shutter rs frames guided by the novel event camera data although events possess high temporal resolution beneficial for video frame interpolation vfi a hurdle in tackling this task is the lack of paired gs frames another challenge is that rs frames are susceptible to distortion when capturing moving objects to this end we propose a novel self supervised framework that leverages events to guide rs frame correction and vfi in a unified framework our key idea is to estimate the displacement field df non linear dense spatiotemporal information of all pixels during the exposure time allowing for the reciprocal reconstruction between rs and gs frames as well as arbitrary frame rate vfi specifically the displacement field estimation dfe module is proposed to estimate the spatiotemporal motion from events to correct the rs distortion and interpolate the gs frames in one step we then combine the input rs frames and df to learn a mapping for rs to gs frame interpolation however as the mapping is highly under constrained we couple it with an inverse mapping i e gs to rs and rs frame warping i e rs to rs for self supervision as there is a lack of labeled datasets for evaluation we generate two synthetic datasets and collect a real world dataset to train and test our method experimental results show that our method yields comparable or better performance with prior supervised methods rethinking cross entropy loss for stereo matching networks authors peng xu zhiyu xiang chenyu qiao jingyun fu xijun zhao subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract despite the great success of deep learning in stereo matching recovering accurate and clearly contoured disparity map is still challenging currently loss and cross entropy loss are the two most widely used loss functions for training the stereo matching networks comparing with the former the latter can usually achieve better results thanks to its direct constraint to the the cost volume however how to generate reasonable ground truth distribution for this loss function remains largely under exploited existing works assume uni modal distributions around the ground truth for all of the pixels which ignores the fact that the edge pixels may have multi modal distributions in this paper we first experimentally exhibit the importance of correct edge supervision to the overall disparity accuracy then a novel adaptive multi modal cross entropy loss which encourages the network to generate different distribution patterns for edge and non edge pixels is proposed we further optimize the disparity estimator in the inference stage to alleviate the bleeding and misalignment artifacts at the edge our method is generic and can help classic stereo matching models regain competitive performance ganet trained by our loss ranks on the kitti and benchmarks and outperforms state of the art methods by a large margin meanwhile our method also exhibits superior cross domain generalization ability and outperforms existing generalization specialized methods on four popular real world datasets keyword image signal processing there is no result keyword image signal process there is no result keyword compression you can mask more for extremely low bitrate image compression authors anqi li feng li jiaxin han huihui bai runmin cong chunjie zhang meng wang weisi lin yao zhao subjects computer vision and pattern recognition cs cv multimedia cs mm image and video processing eess iv arxiv link pdf link abstract learned image compression lic methods have experienced significant progress during recent years however these methods are primarily dedicated to optimizing the rate distortion r d performance at medium and high bitrates bits per pixel bpp while research on extremely low bitrates is limited besides existing methods fail to explicitly explore the image structure and texture components crucial for image compression treating them equally alongside uninformative components in networks this can cause severe perceptual quality degradation especially under low bitrate scenarios in this work inspired by the success of pre trained masked autoencoders mae in many downstream tasks we propose to rethink its mask sampling strategy from structure and texture perspectives for high redundancy reduction and discriminative feature representation further unleashing the potential of lic methods therefore we present a dual adaptive masking approach da mask that samples visible patches based on the structure and texture distributions of original images we combine da mask and pre trained mae in masked image modeling mim as an initial compressor that abstracts informative semantic context and texture representations such a pipeline can well cooperate with lic networks to achieve further secondary compression while preserving promising reconstruction quality consequently we propose a simple yet effective masked compression model mcm the first framework that unifies mim and lic end to end for extremely low bitrate image compression extensive experiments have demonstrated that our approach outperforms recent state of the art methods in r d performance visual quality and downstream applications at very low bitrates our code is available at keyword raw spectral analysis of marine debris in simulated and observed sentinel msi images using unsupervised classification authors bianca matos de barros douglas galimberti barbosa cristiano lima hackmann subjects computer vision and pattern recognition cs cv machine learning cs lg image and video processing eess iv arxiv link pdf link abstract marine litter poses significant threats to marine and coastal environments with its impacts ever growing remote sensing provides an advantageous supplement to traditional mitigation techniques such as local cleaning operations and trawl net surveys due to its capabilities for extensive coverage and frequent observation in this study we used radiative transfer model rtm simulated data and data from the multispectral instrument msi of the sentinel mission in combination with machine learning algorithms our aim was to study the spectral behavior of marine plastic pollution and evaluate the applicability of rtms within this research area the results from the exploratory analysis and unsupervised classification using the kmeans algorithm indicate that the spectral behavior of pollutants is influenced by factors such as the type of polymer and pixel coverage percentage the findings also reveal spectral characteristics and trends of association and differentiation among elements the applied methodology is strongly dependent on the data and if reapplied in new more diverse and detailed datasets it can potentially generate even better results these insights can guide future research in remote sensing applications for detecting marine plastic pollution panet lidar panoptic segmentation with sparse instance proposal and aggregation authors jianbiao mei yu yang mengmeng wang xiaojun hou laijian li yong liu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract reliable lidar panoptic segmentation lps including both semantic and instance segmentation is vital for many robotic applications such as autonomous driving this work proposes a new lps framework named panet to eliminate the dependency on the offset branch and improve the performance on large objects which are always over segmented by clustering algorithms firstly we propose a non learning sparse instance proposal sip module with the sampling shifting grouping scheme to directly group thing points into instances from the raw point cloud efficiently more specifically balanced point sampling is introduced to generate sparse seed points with more uniform point distribution over the distance range and a shift module termed bubble shifting is proposed to shrink the seed points to the clustered centers then we utilize the connected component label algorithm to generate instance proposals furthermore an instance aggregation module is devised to integrate potentially fragmented instances improving the performance of the sip module on large objects extensive experiments show that panet achieves state of the art performance among published works on the semantickitii validation and nuscenes validation for the panoptic segmentation task detector free structure from motion authors xingyi he jiaming sun yifan wang sida peng qixing huang hujun bao xiaowei zhou subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we propose a new structure from motion framework to recover accurate camera poses and point clouds from unordered images traditional sfm systems typically rely on the successful detection of repeatable keypoints across multiple views as the first step which is difficult for texture poor scenes and poor keypoint detection may break down the whole sfm system we propose a new detector free sfm framework to draw benefits from the recent success of detector free matchers to avoid the early determination of keypoints while solving the multi view inconsistency issue of detector free matchers specifically our framework first reconstructs a coarse sfm model from quantized detector free matches then it refines the model by a novel iterative refinement pipeline which iterates between an attention based multi view matching module to refine feature tracks and a geometry refinement module to improve the reconstruction accuracy experiments demonstrate that the proposed framework outperforms existing detector based sfm systems on common benchmark datasets we also collect a texture poor sfm dataset to demonstrate the capability of our framework to reconstruct texture poor scenes based on this framework we take textit first place in image matching challenge keyword raw image there is no result
1
8,230
7,299,005,779
IssuesEvent
2018-02-26 18:44:05
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
Gerrit doesn't check for CLAs
P1 high area-infrastructure
On github every PR had an automatic check for CLA. The same thing should happen on Gerrit, so that we can more easily accept external patches through Gerrit.
1.0
Gerrit doesn't check for CLAs - On github every PR had an automatic check for CLA. The same thing should happen on Gerrit, so that we can more easily accept external patches through Gerrit.
non_process
gerrit doesn t check for clas on github every pr had an automatic check for cla the same thing should happen on gerrit so that we can more easily accept external patches through gerrit
0
319,920
23,795,386,886
IssuesEvent
2022-09-02 19:00:18
7-Stories-Above-Ponce/yelpApp
https://api.github.com/repos/7-Stories-Above-Ponce/yelpApp
closed
README File
documentation
As a DEVELOPER, I want to develop a README in order to provide insight to the user about the app.
1.0
README File - As a DEVELOPER, I want to develop a README in order to provide insight to the user about the app.
non_process
readme file as a developer i want to develop a readme in order to provide insight to the user about the app
0
7,858
11,033,436,132
IssuesEvent
2019-12-06 23:00:27
shirou/gopsutil
https://api.github.com/repos/shirou/gopsutil
closed
[process][darwin] Panic when calling process.CreateTime()
os:darwin package:process
Am trying to list all the processes and print the names as follows: ``` func listAllProcesses() { processes, err := process.Processes() if err != nil { fmt.Println("Could not get a list of processes!") return } for _, process := range processes { name, err := process.Cmdline() if err != nil { // There was an error retrieving the command line for this process. Continue to the next. continue } fmt.Println(name) } } ``` Am seeing panics as follows: ``` panic: runtime error: index out of range [0] with length 0 goroutine 401 [running]: github.com/shirou/gopsutil/process.(*Process).createTimeWithContext(0xc0003ed810, 0x4e43480, 0xc0000e2020, 0x0, 0x0, 0x0) /Users/amitsaurav/Snapchat/Dev/go/pkg/mod/github.com/shirou/gopsutil@v2.19.9+incompatible/process/process_darwin.go:139 +0x38c github.com/shirou/gopsutil/process.(*Process).CreateTimeWithContext(0xc0003ed810, 0x4e43480, 0xc0000e2020, 0x428c537, 0x0, 0x0) /Users/amitsaurav/Snapchat/Dev/go/pkg/mod/github.com/shirou/gopsutil@v2.19.9+incompatible/process/process.go:257 +0x66 github.com/shirou/gopsutil/process.(*Process).CreateTime(0xc0003ed810, 0x1, 0x0, 0x0) /Users/amitsaurav/Snapchat/Dev/go/pkg/mod/github.com/shirou/gopsutil@v2.19.9+incompatible/process/process.go:250 +0x43 created by github.com/shirou/gopsutil/process.NewProcess /Users/amitsaurav/Snapchat/Dev/go/pkg/mod/github.com/shirou/gopsutil@v2.19.9+incompatible/process/process.go:167 +0xa0 exit status 2 ```
1.0
[process][darwin] Panic when calling process.CreateTime() - Am trying to list all the processes and print the names as follows: ``` func listAllProcesses() { processes, err := process.Processes() if err != nil { fmt.Println("Could not get a list of processes!") return } for _, process := range processes { name, err := process.Cmdline() if err != nil { // There was an error retrieving the command line for this process. Continue to the next. continue } fmt.Println(name) } } ``` Am seeing panics as follows: ``` panic: runtime error: index out of range [0] with length 0 goroutine 401 [running]: github.com/shirou/gopsutil/process.(*Process).createTimeWithContext(0xc0003ed810, 0x4e43480, 0xc0000e2020, 0x0, 0x0, 0x0) /Users/amitsaurav/Snapchat/Dev/go/pkg/mod/github.com/shirou/gopsutil@v2.19.9+incompatible/process/process_darwin.go:139 +0x38c github.com/shirou/gopsutil/process.(*Process).CreateTimeWithContext(0xc0003ed810, 0x4e43480, 0xc0000e2020, 0x428c537, 0x0, 0x0) /Users/amitsaurav/Snapchat/Dev/go/pkg/mod/github.com/shirou/gopsutil@v2.19.9+incompatible/process/process.go:257 +0x66 github.com/shirou/gopsutil/process.(*Process).CreateTime(0xc0003ed810, 0x1, 0x0, 0x0) /Users/amitsaurav/Snapchat/Dev/go/pkg/mod/github.com/shirou/gopsutil@v2.19.9+incompatible/process/process.go:250 +0x43 created by github.com/shirou/gopsutil/process.NewProcess /Users/amitsaurav/Snapchat/Dev/go/pkg/mod/github.com/shirou/gopsutil@v2.19.9+incompatible/process/process.go:167 +0xa0 exit status 2 ```
process
panic when calling process createtime am trying to list all the processes and print the names as follows func listallprocesses processes err process processes if err nil fmt println could not get a list of processes return for process range processes name err process cmdline if err nil there was an error retrieving the command line for this process continue to the next continue fmt println name am seeing panics as follows panic runtime error index out of range with length goroutine github com shirou gopsutil process process createtimewithcontext users amitsaurav snapchat dev go pkg mod github com shirou gopsutil incompatible process process darwin go github com shirou gopsutil process process createtimewithcontext users amitsaurav snapchat dev go pkg mod github com shirou gopsutil incompatible process process go github com shirou gopsutil process process createtime users amitsaurav snapchat dev go pkg mod github com shirou gopsutil incompatible process process go created by github com shirou gopsutil process newprocess users amitsaurav snapchat dev go pkg mod github com shirou gopsutil incompatible process process go exit status
1
5,923
8,743,165,179
IssuesEvent
2018-12-12 18:21:49
googleapis/google-cloud-python
https://api.github.com/repos/googleapis/google-cloud-python
closed
[Firestore] Field Path escaping in python doesn't match other languages
api: firestore triaged for GA type: process
There are no FieldPath overloads in the SDK. Instead, there is a field_path() method that returns an encoded string. While the Python SDK expects fully escaped field paths, other SDKS escape user-provided field paths by default and split on dots for this escaping.
1.0
[Firestore] Field Path escaping in python doesn't match other languages - There are no FieldPath overloads in the SDK. Instead, there is a field_path() method that returns an encoded string. While the Python SDK expects fully escaped field paths, other SDKS escape user-provided field paths by default and split on dots for this escaping.
process
field path escaping in python doesn t match other languages there are no fieldpath overloads in the sdk instead there is a field path method that returns an encoded string while the python sdk expects fully escaped field paths other sdks escape user provided field paths by default and split on dots for this escaping
1
274,476
20,834,165,028
IssuesEvent
2022-03-19 23:25:27
SE701-T1/frontend
https://api.github.com/repos/SE701-T1/frontend
closed
Add Code Owners
Status: Available Type: Documentation Priority: Low
**Describe the task that needs to be done.** Code owners need to be added so that the right people are automatically requested for review when PRs are made in directories relevant to them. **Describe how a solution to your proposed task might look like (and any alternatives considered).** Split the `src` directory into folders by features . The frontend devs in each team will be code owners for their feature subdirectory. I see that we have already set up `components` and `pages` subdirectories, so I need some advice on how to best incorporate the feature subdirectories. Option 1: ![image](https://user-images.githubusercontent.com/68877945/157973856-70972e29-9d9f-42ea-8dfa-33f174e1605b.png) Option 2: ![image](https://user-images.githubusercontent.com/68877945/157973835-fa9f28c5-3a1b-405f-a625-e9b619268677.png) **Notes** Could I get some advice from the frontend team as to how to go about this, since I'm in backend and don't know about the plans for the frontend repo.
1.0
Add Code Owners - **Describe the task that needs to be done.** Code owners need to be added so that the right people are automatically requested for review when PRs are made in directories relevant to them. **Describe how a solution to your proposed task might look like (and any alternatives considered).** Split the `src` directory into folders by features . The frontend devs in each team will be code owners for their feature subdirectory. I see that we have already set up `components` and `pages` subdirectories, so I need some advice on how to best incorporate the feature subdirectories. Option 1: ![image](https://user-images.githubusercontent.com/68877945/157973856-70972e29-9d9f-42ea-8dfa-33f174e1605b.png) Option 2: ![image](https://user-images.githubusercontent.com/68877945/157973835-fa9f28c5-3a1b-405f-a625-e9b619268677.png) **Notes** Could I get some advice from the frontend team as to how to go about this, since I'm in backend and don't know about the plans for the frontend repo.
non_process
add code owners describe the task that needs to be done code owners need to be added so that the right people are automatically requested for review when prs are made in directories relevant to them describe how a solution to your proposed task might look like and any alternatives considered split the src directory into folders by features the frontend devs in each team will be code owners for their feature subdirectory i see that we have already set up components and pages subdirectories so i need some advice on how to best incorporate the feature subdirectories option option notes could i get some advice from the frontend team as to how to go about this since i m in backend and don t know about the plans for the frontend repo
0
344,884
24,832,992,928
IssuesEvent
2022-10-26 06:18:06
tensorchord/envd
https://api.github.com/repos/tensorchord/envd
closed
bug(doc): unclear chart when change to night mode
type/bug 🐛 type/documentation 📄
<img width="879" alt="Screen Shot 2022-10-25 at 22 23 39" src="https://user-images.githubusercontent.com/3927355/197800195-71decaa2-2b45-438d-a8ac-383d6142b56e.png">
1.0
bug(doc): unclear chart when change to night mode - <img width="879" alt="Screen Shot 2022-10-25 at 22 23 39" src="https://user-images.githubusercontent.com/3927355/197800195-71decaa2-2b45-438d-a8ac-383d6142b56e.png">
non_process
bug doc unclear chart when change to night mode img width alt screen shot at src
0
10,237
13,098,106,842
IssuesEvent
2020-08-03 18:47:20
googleapis/python-bigtable
https://api.github.com/repos/googleapis/python-bigtable
opened
systests / snippets leaking instances
testing type: process
![Screenshot from 2020-08-03 14-40-09](https://user-images.githubusercontent.com/242750/89215805-43a6bc00-d597-11ea-8705-cf5bc32c2cd8.png) | Instance ID | Creation Date | | --- | --- | | g-c-p-1591650987756 | 2020-06-08t21-16-27 | | g-c-p-1591651011152 | 2020-06-08t21-16-51 | | g-c-p-1591720949796 | 2020-06-09t16-42-29 | | g-c-p-1591721062073 | 2020-06-09t16-44-22 | | g-c-p-1594318861981 | 2020-07-09t18-21-01 | | g-c-p-d-1591651011152 | 2020-06-08t21-16-51 | | g-c-p-d-1594318861981 | 2020-07-09t18-21-01 | | snippet-tests-1596479760618 | 2020-08-03t18-36-00 | | snippet-1561580632585 | 2019-06-26t20-23-52 | | snippet-1596479773184 | 2020-08-03t18-36-13 | | snippet-tests-1588356586020 | 2020-05-01t18-09-46 | Of those, two represent what seem to be currently running CI jobs. I will delete the rest, but we need to find and plug the leak.
1.0
systests / snippets leaking instances - ![Screenshot from 2020-08-03 14-40-09](https://user-images.githubusercontent.com/242750/89215805-43a6bc00-d597-11ea-8705-cf5bc32c2cd8.png) | Instance ID | Creation Date | | --- | --- | | g-c-p-1591650987756 | 2020-06-08t21-16-27 | | g-c-p-1591651011152 | 2020-06-08t21-16-51 | | g-c-p-1591720949796 | 2020-06-09t16-42-29 | | g-c-p-1591721062073 | 2020-06-09t16-44-22 | | g-c-p-1594318861981 | 2020-07-09t18-21-01 | | g-c-p-d-1591651011152 | 2020-06-08t21-16-51 | | g-c-p-d-1594318861981 | 2020-07-09t18-21-01 | | snippet-tests-1596479760618 | 2020-08-03t18-36-00 | | snippet-1561580632585 | 2019-06-26t20-23-52 | | snippet-1596479773184 | 2020-08-03t18-36-13 | | snippet-tests-1588356586020 | 2020-05-01t18-09-46 | Of those, two represent what seem to be currently running CI jobs. I will delete the rest, but we need to find and plug the leak.
process
systests snippets leaking instances instance id creation date g c p g c p g c p g c p g c p g c p d g c p d snippet tests snippet snippet snippet tests of those two represent what seem to be currently running ci jobs i will delete the rest but we need to find and plug the leak
1
449,342
31,839,462,084
IssuesEvent
2023-09-14 15:22:32
CodeSystem2022/ERROR-404-Trabajo-Final
https://api.github.com/repos/CodeSystem2022/ERROR-404-Trabajo-Final
opened
-Realizar las pruebas y documentarlas
documentation enhancement
- Como tester del proyecto deberá realizar la documentación de de las pruebas, documentarlas y pasar el link de dicho documento a Sonia para que lo pueda agregar en el README.
1.0
-Realizar las pruebas y documentarlas - - Como tester del proyecto deberá realizar la documentación de de las pruebas, documentarlas y pasar el link de dicho documento a Sonia para que lo pueda agregar en el README.
non_process
realizar las pruebas y documentarlas como tester del proyecto deberá realizar la documentación de de las pruebas documentarlas y pasar el link de dicho documento a sonia para que lo pueda agregar en el readme
0
38,943
10,268,641,211
IssuesEvent
2019-08-23 06:58:53
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
reopened
error: 'is_final' is not a member of 'std'
subtype:bazel type:build/install
With bazel 0.26.1 gcc 7.4 cuda 10.1 update 2 and the latest git clone of tensorflow, I hit the following error at the bazel build command ``` ERROR: /home/mh.naderan/.cache/bazel/_bazel_mh.naderan/dacf7a124fc721f30ac789c201b3b139/external/llvm/BUILD.bazel:201:1: C++ compilation of rule '@llvm//:llvm-tblgen' failed (Exit 1) In file included from external/llvm/include/llvm/TableGen/Record.h:27:0, from external/llvm/utils/TableGen/SubtargetFeatureInfo.h:13, from external/llvm/utils/TableGen/SubtargetFeatureInfo.cpp:9: external/llvm/include/llvm/Support/TrailingObjects.h: In static member function 'static void llvm::TrailingObjects<BaseTy, TrailingTys>::verifyTrailingObjectsAssertions()': external/llvm/include/llvm/Support/TrailingObjects.h:252:24: error: 'is_final' is not a member of 'std' static_assert(std::is_final<BaseTy>(), "BaseTy must be final."); ^~~~~~~~ external/llvm/include/llvm/Support/TrailingObjects.h:252:24: note: suggested alternative: 'is_heap' static_assert(std::is_final<BaseTy>(), "BaseTy must be final."); ^~~~~~~~ is_heap external/llvm/include/llvm/Support/TrailingObjects.h:252:39: error: expected primary-expression before '>' token static_assert(std::is_final<BaseTy>(), "BaseTy must be final."); ^ external/llvm/include/llvm/Support/TrailingObjects.h:252:41: error: expected primary-expression before ')' token static_assert(std::is_final<BaseTy>(), "BaseTy must be final."); ^ Target //tensorflow/tools/pip_package:build_pip_package failed to build Use --verbose_failures to see the command lines of failed build steps. INFO: Elapsed time: 442.171s, Critical Path: 25.05s INFO: 1254 processes: 1254 local. FAILED: Build did NOT complete successfully ```
1.0
error: 'is_final' is not a member of 'std' - With bazel 0.26.1 gcc 7.4 cuda 10.1 update 2 and the latest git clone of tensorflow, I hit the following error at the bazel build command ``` ERROR: /home/mh.naderan/.cache/bazel/_bazel_mh.naderan/dacf7a124fc721f30ac789c201b3b139/external/llvm/BUILD.bazel:201:1: C++ compilation of rule '@llvm//:llvm-tblgen' failed (Exit 1) In file included from external/llvm/include/llvm/TableGen/Record.h:27:0, from external/llvm/utils/TableGen/SubtargetFeatureInfo.h:13, from external/llvm/utils/TableGen/SubtargetFeatureInfo.cpp:9: external/llvm/include/llvm/Support/TrailingObjects.h: In static member function 'static void llvm::TrailingObjects<BaseTy, TrailingTys>::verifyTrailingObjectsAssertions()': external/llvm/include/llvm/Support/TrailingObjects.h:252:24: error: 'is_final' is not a member of 'std' static_assert(std::is_final<BaseTy>(), "BaseTy must be final."); ^~~~~~~~ external/llvm/include/llvm/Support/TrailingObjects.h:252:24: note: suggested alternative: 'is_heap' static_assert(std::is_final<BaseTy>(), "BaseTy must be final."); ^~~~~~~~ is_heap external/llvm/include/llvm/Support/TrailingObjects.h:252:39: error: expected primary-expression before '>' token static_assert(std::is_final<BaseTy>(), "BaseTy must be final."); ^ external/llvm/include/llvm/Support/TrailingObjects.h:252:41: error: expected primary-expression before ')' token static_assert(std::is_final<BaseTy>(), "BaseTy must be final."); ^ Target //tensorflow/tools/pip_package:build_pip_package failed to build Use --verbose_failures to see the command lines of failed build steps. INFO: Elapsed time: 442.171s, Critical Path: 25.05s INFO: 1254 processes: 1254 local. FAILED: Build did NOT complete successfully ```
non_process
error is final is not a member of std with bazel gcc cuda update and the latest git clone of tensorflow i hit the following error at the bazel build command error home mh naderan cache bazel bazel mh naderan external llvm build bazel c compilation of rule llvm llvm tblgen failed exit in file included from external llvm include llvm tablegen record h from external llvm utils tablegen subtargetfeatureinfo h from external llvm utils tablegen subtargetfeatureinfo cpp external llvm include llvm support trailingobjects h in static member function static void llvm trailingobjects verifytrailingobjectsassertions external llvm include llvm support trailingobjects h error is final is not a member of std static assert std is final basety must be final external llvm include llvm support trailingobjects h note suggested alternative is heap static assert std is final basety must be final is heap external llvm include llvm support trailingobjects h error expected primary expression before token static assert std is final basety must be final external llvm include llvm support trailingobjects h error expected primary expression before token static assert std is final basety must be final target tensorflow tools pip package build pip package failed to build use verbose failures to see the command lines of failed build steps info elapsed time critical path info processes local failed build did not complete successfully
0
32,651
13,903,557,148
IssuesEvent
2020-10-20 07:25:29
elastic/beats
https://api.github.com/repos/elastic/beats
closed
System module - Strange behavior when disabled metricsets will still collect metrics
Team:Services bug v7.10.0
Events from other metricsets coming in although they have been disabled. - when I run the cpu metricset even though I disable process and network in the system.yml config file metricset they are still running. Example config: ``` - module: system period: 10s metricsets: - cpu #- load #- memory #- network #- process #- process_summary #- socket_summary #- entropy #- core #- diskio #- socket #- service #- users process.include_top_n: by_cpu: 5 # include top 5 processes by CPU by_memory: 5 # include top 5 processes by memory - module: system period: 1m metricsets: #- filesystem #- fsstat processors: - drop_event.when.regexp: system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)' - module: system period: 15m metricsets: - uptime #- module: system # period: 5m # metricsets: # - raid # raid.mount_point: '/' ``` - same I run the memory metricset then process_summary and socket_summary events are also coming in, this behavior happens with the others also. Windows 10, 64 bit
1.0
System module - Strange behavior when disabled metricsets will still collect metrics - Events from other metricsets coming in although they have been disabled. - when I run the cpu metricset even though I disable process and network in the system.yml config file metricset they are still running. Example config: ``` - module: system period: 10s metricsets: - cpu #- load #- memory #- network #- process #- process_summary #- socket_summary #- entropy #- core #- diskio #- socket #- service #- users process.include_top_n: by_cpu: 5 # include top 5 processes by CPU by_memory: 5 # include top 5 processes by memory - module: system period: 1m metricsets: #- filesystem #- fsstat processors: - drop_event.when.regexp: system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)' - module: system period: 15m metricsets: - uptime #- module: system # period: 5m # metricsets: # - raid # raid.mount_point: '/' ``` - same I run the memory metricset then process_summary and socket_summary events are also coming in, this behavior happens with the others also. Windows 10, 64 bit
non_process
system module strange behavior when disabled metricsets will still collect metrics events from other metricsets coming in although they have been disabled when i run the cpu metricset even though i disable process and network in the system yml config file metricset they are still running example config module system period metricsets cpu load memory network process process summary socket summary entropy core diskio socket service users process include top n by cpu include top processes by cpu by memory include top processes by memory module system period metricsets filesystem fsstat processors drop event when regexp system filesystem mount point sys cgroup proc dev etc host lib snap module system period metricsets uptime module system period metricsets raid raid mount point same i run the memory metricset then process summary and socket summary events are also coming in this behavior happens with the others also windows bit
0
9,220
12,256,874,814
IssuesEvent
2020-05-06 12:50:59
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Output in Process{} block is not shown - By design, or undocumented "feature"?
Pri2 automation/svc cxp process-automation/subsvc product-question triaged
Using a ```Process{}``` block inside a Automation Account runbook does not give any output to terminal/ output streams. ```Begin{}``` and ```End{}``` shows output though. Sample code: ```powershell [OutputType($null)] Param () Begin { Write-Output -InputObject 'Begin -&gt; Write-Output' Write-Verbose -Message 'Begin -&gt; Write-Verbose' -Verbose } Process { Write-Output -InputObject 'Process -&gt; Write-Output' Write-Verbose -Message 'Process -&gt; Write-Verbose' -Verbose } End { Write-Output -InputObject 'End -&gt; Write-Output' Write-Verbose -Message 'End -&gt; Write-Verbose' -Verbose } ``` Output in PowerShell 5.1, Windows 10 ```powershell Begin -&gt; Write-Output VERBOSE: Begin -&gt; Write-Verbose Process -&gt; Write-Output VERBOSE: Process -&gt; Write-Verbose End -&gt; Write-Output VERBOSE: End -&gt; Write-Verbose ``` Output in Automation Account as of 2020-04-29 ```powershell Begin -&gt; Write-Output Begin -&gt; Write-Verbose End -&gt; Write-Output End -&gt; Write-Verbose ``` Why is that? * If this is a known limitation/ by design, please update this documentation to say so. * If this is a bug, please send it to the correct team. * UserVoice is not appropriate because this is a bug, not a feature request. * Azure Tickets are way to time consuming and usually does not lead to anything. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: cc677cf7-8d87-d8f9-a333-b3c6b9781737 * Version Independent ID: 63a960ad-4046-40ec-5589-9e682fea8b22 * Content: [Runbook output and messages in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-runbook-output-and-messages) * Content Source: [articles/automation/automation-runbook-output-and-messages.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-runbook-output-and-messages.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**
1.0
Output in Process{} block is not shown - By design, or undocumented "feature"? - Using a ```Process{}``` block inside a Automation Account runbook does not give any output to terminal/ output streams. ```Begin{}``` and ```End{}``` shows output though. Sample code: ```powershell [OutputType($null)] Param () Begin { Write-Output -InputObject 'Begin -&gt; Write-Output' Write-Verbose -Message 'Begin -&gt; Write-Verbose' -Verbose } Process { Write-Output -InputObject 'Process -&gt; Write-Output' Write-Verbose -Message 'Process -&gt; Write-Verbose' -Verbose } End { Write-Output -InputObject 'End -&gt; Write-Output' Write-Verbose -Message 'End -&gt; Write-Verbose' -Verbose } ``` Output in PowerShell 5.1, Windows 10 ```powershell Begin -&gt; Write-Output VERBOSE: Begin -&gt; Write-Verbose Process -&gt; Write-Output VERBOSE: Process -&gt; Write-Verbose End -&gt; Write-Output VERBOSE: End -&gt; Write-Verbose ``` Output in Automation Account as of 2020-04-29 ```powershell Begin -&gt; Write-Output Begin -&gt; Write-Verbose End -&gt; Write-Output End -&gt; Write-Verbose ``` Why is that? * If this is a known limitation/ by design, please update this documentation to say so. * If this is a bug, please send it to the correct team. * UserVoice is not appropriate because this is a bug, not a feature request. * Azure Tickets are way to time consuming and usually does not lead to anything. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: cc677cf7-8d87-d8f9-a333-b3c6b9781737 * Version Independent ID: 63a960ad-4046-40ec-5589-9e682fea8b22 * Content: [Runbook output and messages in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/automation-runbook-output-and-messages) * Content Source: [articles/automation/automation-runbook-output-and-messages.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-runbook-output-and-messages.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**
process
output in process block is not shown by design or undocumented feature using a process block inside a automation account runbook does not give any output to terminal output streams begin and end shows output though sample code powershell param begin write output inputobject begin gt write output write verbose message begin gt write verbose verbose process write output inputobject process gt write output write verbose message process gt write verbose verbose end write output inputobject end gt write output write verbose message end gt write verbose verbose output in powershell windows powershell begin gt write output verbose begin gt write verbose process gt write output verbose process gt write verbose end gt write output verbose end gt write verbose output in automation account as of powershell begin gt write output begin gt write verbose end gt write output end gt write verbose why is that if this is a known limitation by design please update this documentation to say so if this is a bug please send it to the correct team uservoice is not appropriate because this is a bug not a feature request azure tickets are way to time consuming and usually does not lead to anything document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
1
2,140
4,982,505,263
IssuesEvent
2016-12-07 11:35:11
jlm2017/jlm-video-subtitles
https://api.github.com/repos/jlm2017/jlm-video-subtitles
opened
[subtitles] [FR] Le libre-échange détruit la sidérurgie française - Invervention au Parlement européen
Language: French Process: [0] Awaiting subtitles
# Video title Le libre-échange détruit la sidérurgie française - Intervention au Parlement européen # URL https://www.youtube.com/watch?v=hwveRPW9Doc&index=12&list=PLnAm9o_Xn_3DU-PS1cGkVoMthELDzFVRT # Youtube subtitles language Français # Duration 1:05 # URL subtitles https://www.youtube.com/timedtext_editor?v=hwveRPW9Doc&ui=hd&tab=captions&bl=vmp&action_mde_edit_form=1&lang=fr&ref=watch
1.0
[subtitles] [FR] Le libre-échange détruit la sidérurgie française - Invervention au Parlement européen - # Video title Le libre-échange détruit la sidérurgie française - Intervention au Parlement européen # URL https://www.youtube.com/watch?v=hwveRPW9Doc&index=12&list=PLnAm9o_Xn_3DU-PS1cGkVoMthELDzFVRT # Youtube subtitles language Français # Duration 1:05 # URL subtitles https://www.youtube.com/timedtext_editor?v=hwveRPW9Doc&ui=hd&tab=captions&bl=vmp&action_mde_edit_form=1&lang=fr&ref=watch
process
le libre échange détruit la sidérurgie française invervention au parlement européen video title le libre échange détruit la sidérurgie française intervention au parlement européen url youtube subtitles language français duration url subtitles
1
380,192
11,255,131,854
IssuesEvent
2020-01-12 06:29:45
AugurProject/augur
https://api.github.com/repos/AugurProject/augur
opened
The interest sweep doesn't work during global shutdown.
Priority: Medium V2 Audit
https://github.com/AugurProject/augur/blob/5235fda53bd586efe3b2f0c11f0812d5eb72bd98/packages/augur-core/source/contracts/reporting/Universe.sol#L725 When Maker goes into global shutdown, DAI cannot be minted anymore, Pot DAI can only be turned into Vat DAI. The `sweepInterest` function correctly withdraws Pot DAI to Vat DAI during global shutdown, and it correctly transfers Vat DAI when necessary, but when calculating the balance of how much DAI to transfer it incorrectly uses the DAI balance rather than the DAI + Vat DAI balance. This means any Vat DAI pending will not be counted and DAI interest from shutdown on will effectively be burned. You may want to consider updating the tests to assert if anyone tries to use the `cash` contract directly at all, since even balance checks should be done against DAI + Vat DAI.
1.0
The interest sweep doesn't work during global shutdown. - https://github.com/AugurProject/augur/blob/5235fda53bd586efe3b2f0c11f0812d5eb72bd98/packages/augur-core/source/contracts/reporting/Universe.sol#L725 When Maker goes into global shutdown, DAI cannot be minted anymore, Pot DAI can only be turned into Vat DAI. The `sweepInterest` function correctly withdraws Pot DAI to Vat DAI during global shutdown, and it correctly transfers Vat DAI when necessary, but when calculating the balance of how much DAI to transfer it incorrectly uses the DAI balance rather than the DAI + Vat DAI balance. This means any Vat DAI pending will not be counted and DAI interest from shutdown on will effectively be burned. You may want to consider updating the tests to assert if anyone tries to use the `cash` contract directly at all, since even balance checks should be done against DAI + Vat DAI.
non_process
the interest sweep doesn t work during global shutdown when maker goes into global shutdown dai cannot be minted anymore pot dai can only be turned into vat dai the sweepinterest function correctly withdraws pot dai to vat dai during global shutdown and it correctly transfers vat dai when necessary but when calculating the balance of how much dai to transfer it incorrectly uses the dai balance rather than the dai vat dai balance this means any vat dai pending will not be counted and dai interest from shutdown on will effectively be burned you may want to consider updating the tests to assert if anyone tries to use the cash contract directly at all since even balance checks should be done against dai vat dai
0
69,914
30,500,784,505
IssuesEvent
2023-07-18 13:50:14
Azure/azure-cli
https://api.github.com/repos/Azure/azure-cli
reopened
learn the ETA of the latest Version CLI
Service Attention Network - DNS customer-reported Auto-Assign Auto-Resolve
Hi Team, Hope things are going well. I would like to learn the ETA of the latest Version CLI. Recently, Some customers meet similar issue in using Windows CLI to export/import DNS zone az network dns zone import -g eitb_dns -n eitb.eus --file eitb.eus.txt az network private-dns zone export -g myresourcegroup -n contoso.com -f contoso.com.txt They get same error message: (NoRegisteredProviderFound) No registered resource provider found for location 'global' and API version '2023-07-01-preview' for type 'dnszones/recordsets'. The supported api-versions are '2015-05-04-preview, 2016-04-01, 2017-09-01, 2017-09-15-preview, 2017-10-01, 2018-03-01-preview, 2018-05-01'. The supported locations are ', global'. Code: NoRegisteredProviderFound Message: No registered resource provider found for location 'global' and API version '2023-07-01-preview' for type 'dnszones/recordsets'. The supported api-versions are '2015-05-04-preview, 2016-04-01, 2017-09-01, 2017-09-15-preview, 2017-10-01, 2018-03-01-preview, 2018-05-01'. The supported locations are ', global'. I have found the possible workaround in this link and help customer resolve it by installing previous version windows CLI 2.24.0 and 2.49.0. [Listing DNS entries fails, expecting 2023-07-01-preview API version to be present. · Issue #26813 · Azure/azure-cli · GitHub](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2FAzure%2Fazure-cli%2Fissues%2F26813&data=05%7C01%7Cv-bowenli%40microsoft.com%7Ca27ae39a377e42f4f4cf08db83f39168%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638248854409784160%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=1yCUSgLtfO5wA4fhkvHzvNJLcS61uDwlh12x9eScz7E%3D&reserved=0) But they want to know the ETA of the latest version windows CLI to support '2023-07-01-preview' API, could you help in it? Thank you very much for your reply!
1.0
learn the ETA of the latest Version CLI - Hi Team, Hope things are going well. I would like to learn the ETA of the latest Version CLI. Recently, Some customers meet similar issue in using Windows CLI to export/import DNS zone az network dns zone import -g eitb_dns -n eitb.eus --file eitb.eus.txt az network private-dns zone export -g myresourcegroup -n contoso.com -f contoso.com.txt They get same error message: (NoRegisteredProviderFound) No registered resource provider found for location 'global' and API version '2023-07-01-preview' for type 'dnszones/recordsets'. The supported api-versions are '2015-05-04-preview, 2016-04-01, 2017-09-01, 2017-09-15-preview, 2017-10-01, 2018-03-01-preview, 2018-05-01'. The supported locations are ', global'. Code: NoRegisteredProviderFound Message: No registered resource provider found for location 'global' and API version '2023-07-01-preview' for type 'dnszones/recordsets'. The supported api-versions are '2015-05-04-preview, 2016-04-01, 2017-09-01, 2017-09-15-preview, 2017-10-01, 2018-03-01-preview, 2018-05-01'. The supported locations are ', global'. I have found the possible workaround in this link and help customer resolve it by installing previous version windows CLI 2.24.0 and 2.49.0. [Listing DNS entries fails, expecting 2023-07-01-preview API version to be present. · Issue #26813 · Azure/azure-cli · GitHub](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2FAzure%2Fazure-cli%2Fissues%2F26813&data=05%7C01%7Cv-bowenli%40microsoft.com%7Ca27ae39a377e42f4f4cf08db83f39168%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638248854409784160%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=1yCUSgLtfO5wA4fhkvHzvNJLcS61uDwlh12x9eScz7E%3D&reserved=0) But they want to know the ETA of the latest version windows CLI to support '2023-07-01-preview' API, could you help in it? Thank you very much for your reply!
non_process
learn the eta of the latest version cli hi team hope things are going well i would like to learn the eta of the latest version cli recently some customers meet similar issue in using windows cli to export import dns zone az network dns zone import g eitb dns n eitb eus file eitb eus txt az network private dns zone export g myresourcegroup n contoso com f contoso com txt they get same error message noregisteredproviderfound no registered resource provider found for location global and api version preview for type dnszones recordsets the supported api versions are preview preview preview the supported locations are global code noregisteredproviderfound message no registered resource provider found for location global and api version preview for type dnszones recordsets the supported api versions are preview preview preview the supported locations are global i have found the possible workaround in this link and help customer resolve it by installing previous version windows cli and but they want to know the eta of the latest version windows cli to support preview api could you help in it thank you very much for your reply
0
22,144
30,684,658,557
IssuesEvent
2023-07-26 11:31:00
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
How to establish "Connect-PnPOnline" connection in PowerShell runbook using managed identity?
automation/svc triaged assigned-to-author product-question process-automation/subsvc Pri2
How to establish "Connect-PnPOnline" connection in PowerShell runbook using managed identity? Connect-PnPOnline -ManagedIdentity Seems the above command is not working. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 8a8470c7-57d1-e2ec-cc70-a43c8dfc42d6 * Version Independent ID: 2da6432e-e642-10ae-199c-9ebb1e19a5d8 * Content: [Create PowerShell runbook using managed identity in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/learn/powershell-runbook-managed-identity) * Content Source: [articles/automation/learn/powershell-runbook-managed-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/learn/powershell-runbook-managed-identity.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SGSneha * Microsoft Alias: **v-ssudhir**
1.0
How to establish "Connect-PnPOnline" connection in PowerShell runbook using managed identity? - How to establish "Connect-PnPOnline" connection in PowerShell runbook using managed identity? Connect-PnPOnline -ManagedIdentity Seems the above command is not working. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 8a8470c7-57d1-e2ec-cc70-a43c8dfc42d6 * Version Independent ID: 2da6432e-e642-10ae-199c-9ebb1e19a5d8 * Content: [Create PowerShell runbook using managed identity in Azure Automation](https://docs.microsoft.com/en-us/azure/automation/learn/powershell-runbook-managed-identity) * Content Source: [articles/automation/learn/powershell-runbook-managed-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/learn/powershell-runbook-managed-identity.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @SGSneha * Microsoft Alias: **v-ssudhir**
process
how to establish connect pnponline connection in powershell runbook using managed identity how to establish connect pnponline connection in powershell runbook using managed identity connect pnponline managedidentity seems the above command is not working document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login sgsneha microsoft alias v ssudhir
1
7,906
11,089,904,168
IssuesEvent
2019-12-14 22:00:58
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Conkeyrefs not working correctly when map in sub-directory
needs reproduction preprocess/keyref stale
Similar to #2680, if the ditamap is in a sub-directory below my project root, for example: |-- maps root-- | |-- topics |--images Then defining standard text keys and using conkeyrefs doesn't work. The pulled in text is left blank, because the file isn't being found. I have attached a sample structure (based on the one I submitted with #2680). The map in the top directory builds correctly, that in the subdirectory doesn't. The problem appears to be a miscalculation of the relative path in getRelativePath of ConkeyrefFilter.java. I have attached a fix I've done for my own code involving changes to ConkeyrefFilter.java and Job.java. These are based on source for 2.4.6. [_Dita_ConkeyrefBug_Document.zip](https://github.com/dita-ot/dita-ot/files/1033445/_Dita_ConkeyrefBug_Document.zip) [ConkeyrefFilter.java.zip](https://github.com/dita-ot/dita-ot/files/1033446/ConkeyrefFilter.java.zip) [Job.java.zip](https://github.com/dita-ot/dita-ot/files/1033447/Job.java.zip) Further, looking at the code being developed in GitHub, the changes that are being made in KeyrefPaser.java - around lines 393-395, keydefs for an image) might introduce a similar regression. It's sort of indicating that there is an implicit assumption that maps are held at the root of the dita tree.
1.0
Conkeyrefs not working correctly when map in sub-directory - Similar to #2680, if the ditamap is in a sub-directory below my project root, for example: |-- maps root-- | |-- topics |--images Then defining standard text keys and using conkeyrefs doesn't work. The pulled in text is left blank, because the file isn't being found. I have attached a sample structure (based on the one I submitted with #2680). The map in the top directory builds correctly, that in the subdirectory doesn't. The problem appears to be a miscalculation of the relative path in getRelativePath of ConkeyrefFilter.java. I have attached a fix I've done for my own code involving changes to ConkeyrefFilter.java and Job.java. These are based on source for 2.4.6. [_Dita_ConkeyrefBug_Document.zip](https://github.com/dita-ot/dita-ot/files/1033445/_Dita_ConkeyrefBug_Document.zip) [ConkeyrefFilter.java.zip](https://github.com/dita-ot/dita-ot/files/1033446/ConkeyrefFilter.java.zip) [Job.java.zip](https://github.com/dita-ot/dita-ot/files/1033447/Job.java.zip) Further, looking at the code being developed in GitHub, the changes that are being made in KeyrefPaser.java - around lines 393-395, keydefs for an image) might introduce a similar regression. It's sort of indicating that there is an implicit assumption that maps are held at the root of the dita tree.
process
conkeyrefs not working correctly when map in sub directory similar to if the ditamap is in a sub directory below my project root for example maps root topics images then defining standard text keys and using conkeyrefs doesn t work the pulled in text is left blank because the file isn t being found i have attached a sample structure based on the one i submitted with the map in the top directory builds correctly that in the subdirectory doesn t the problem appears to be a miscalculation of the relative path in getrelativepath of conkeyreffilter java i have attached a fix i ve done for my own code involving changes to conkeyreffilter java and job java these are based on source for further looking at the code being developed in github the changes that are being made in keyrefpaser java around lines keydefs for an image might introduce a similar regression it s sort of indicating that there is an implicit assumption that maps are held at the root of the dita tree
1
26,712
4,777,627,620
IssuesEvent
2016-10-27 16:49:16
gbif/ipt
https://api.github.com/repos/gbif/ipt
closed
Deleting unregistered resource says resource is registered in dialog box
bug Component-i18n Component-UI Priority-High Type-Defect
When I want to delete an **un**registered resource, I get: ![ipt](https://cloud.githubusercontent.com/assets/600993/12320390/bced1320-baa8-11e5-93b6-495685eeffe2.png) Seems like the dialog box doesn't make a distinction between registered and unregistered, but mentions `registered` anyway?
1.0
Deleting unregistered resource says resource is registered in dialog box - When I want to delete an **un**registered resource, I get: ![ipt](https://cloud.githubusercontent.com/assets/600993/12320390/bced1320-baa8-11e5-93b6-495685eeffe2.png) Seems like the dialog box doesn't make a distinction between registered and unregistered, but mentions `registered` anyway?
non_process
deleting unregistered resource says resource is registered in dialog box when i want to delete an un registered resource i get seems like the dialog box doesn t make a distinction between registered and unregistered but mentions registered anyway
0
10,447
13,224,963,366
IssuesEvent
2020-08-17 20:12:33
googleapis/python-storage
https://api.github.com/repos/googleapis/python-storage
opened
Replace unsafe 'timestamp.mktemp' with 'timestamp.{Named,}TemporaryFile' in systests
testing type: process
Pydoc says: ``` Help on function mktemp in tempfile: tempfile.mktemp = mktemp(suffix='', prefix='tmp', dir=None) User-callable function to return a unique temporary file name. The file is not created. Arguments are as for mkstemp, except that the 'text' argument is not accepted. This function is unsafe and should not be used. The file name refers to a file that did not exist at some point, but by the time you get around to creating it, someone else may have beaten you to the punch. ```
1.0
Replace unsafe 'timestamp.mktemp' with 'timestamp.{Named,}TemporaryFile' in systests - Pydoc says: ``` Help on function mktemp in tempfile: tempfile.mktemp = mktemp(suffix='', prefix='tmp', dir=None) User-callable function to return a unique temporary file name. The file is not created. Arguments are as for mkstemp, except that the 'text' argument is not accepted. This function is unsafe and should not be used. The file name refers to a file that did not exist at some point, but by the time you get around to creating it, someone else may have beaten you to the punch. ```
process
replace unsafe timestamp mktemp with timestamp named temporaryfile in systests pydoc says help on function mktemp in tempfile tempfile mktemp mktemp suffix prefix tmp dir none user callable function to return a unique temporary file name the file is not created arguments are as for mkstemp except that the text argument is not accepted this function is unsafe and should not be used the file name refers to a file that did not exist at some point but by the time you get around to creating it someone else may have beaten you to the punch
1
348,259
24,909,968,505
IssuesEvent
2022-10-29 18:41:15
AY2223S1-CS2103T-W11-3/tp
https://api.github.com/repos/AY2223S1-CS2103T-W11-3/tp
closed
[PE-D][Tester D] Invalid command error when user follows format in UG for addcom
type.DocumentationBug
The User Guide (UG) gives the example command `addcom n/Tokyo Ghoul Kaneki f/50 d/2022-10-15` which follows the format for "Adding a commission: `addcom`". ![Screenshot 2022-10-28 at 17.29.15.png](https://raw.githubusercontent.com/carriezhengjr/ped/main/files/0995d38a-b5fb-46f5-a9cf-350d406dd2cd.png) However, this command is invalid when entered into the application. ![Screenshot 2022-10-28 at 17.29.04.png](https://raw.githubusercontent.com/carriezhengjr/ped/main/files/e07fee3d-2b6d-449f-85ea-d74a9687fcf9.png) Error message suggests that parameters should be `n/TITLE f/FEE d/DEADLINE s/COMPLETION STATUS [p/DESCRIPTION] [t/TAG]...` instead. If these are the actual expected parameters for this feature, the UG should be updated. <!--session: 1666944915699-9f02aa31-a152-4882-b5a7-bae59bd58244--><!--Version: Web v3.4.4--> ------------- Labels: `severity.Low` `type.DocumentationBug` original: carriezhengjr/ped#12
1.0
[PE-D][Tester D] Invalid command error when user follows format in UG for addcom - The User Guide (UG) gives the example command `addcom n/Tokyo Ghoul Kaneki f/50 d/2022-10-15` which follows the format for "Adding a commission: `addcom`". ![Screenshot 2022-10-28 at 17.29.15.png](https://raw.githubusercontent.com/carriezhengjr/ped/main/files/0995d38a-b5fb-46f5-a9cf-350d406dd2cd.png) However, this command is invalid when entered into the application. ![Screenshot 2022-10-28 at 17.29.04.png](https://raw.githubusercontent.com/carriezhengjr/ped/main/files/e07fee3d-2b6d-449f-85ea-d74a9687fcf9.png) Error message suggests that parameters should be `n/TITLE f/FEE d/DEADLINE s/COMPLETION STATUS [p/DESCRIPTION] [t/TAG]...` instead. If these are the actual expected parameters for this feature, the UG should be updated. <!--session: 1666944915699-9f02aa31-a152-4882-b5a7-bae59bd58244--><!--Version: Web v3.4.4--> ------------- Labels: `severity.Low` `type.DocumentationBug` original: carriezhengjr/ped#12
non_process
invalid command error when user follows format in ug for addcom the user guide ug gives the example command addcom n tokyo ghoul kaneki f d which follows the format for adding a commission addcom however this command is invalid when entered into the application error message suggests that parameters should be n title f fee d deadline s completion status instead if these are the actual expected parameters for this feature the ug should be updated labels severity low type documentationbug original carriezhengjr ped
0
2,929
5,917,193,370
IssuesEvent
2017-05-22 12:40:07
intelsdi-x/snap
https://api.github.com/repos/intelsdi-x/snap
closed
Plugin wanted: threshold processor
plugin-wishlist/processor
I'd like to have a processor plugin to filter metrics with task defined threshold.
1.0
Plugin wanted: threshold processor - I'd like to have a processor plugin to filter metrics with task defined threshold.
process
plugin wanted threshold processor i d like to have a processor plugin to filter metrics with task defined threshold
1
764,405
26,798,872,198
IssuesEvent
2023-02-01 13:51:18
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.notion.so - site is not usable
browser-firefox priority-important os-linux engine-gecko
<!-- @browser: Firefox 78.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/117679 --> **URL**: https://www.notion.so/signup **Browser / Version**: Firefox 78.0 **Operating System**: Linux **Tested Another Browser**: Yes Opera **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: When I try to login into Notion, the site won't load. Same issue appears if I try to sign up. The site works fine before I try to do either of those things <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2023/2/0a86aadd-8244-4338-a696-c1269a396837.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210531140902</li><li>channel: esr78</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2023/2/2f513b9e-88c5-4807-a5eb-f76c5b932657) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.notion.so - site is not usable - <!-- @browser: Firefox 78.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/117679 --> **URL**: https://www.notion.so/signup **Browser / Version**: Firefox 78.0 **Operating System**: Linux **Tested Another Browser**: Yes Opera **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: When I try to login into Notion, the site won't load. Same issue appears if I try to sign up. The site works fine before I try to do either of those things <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2023/2/0a86aadd-8244-4338-a696-c1269a396837.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210531140902</li><li>channel: esr78</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2023/2/2f513b9e-88c5-4807-a5eb-f76c5b932657) _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
site is not usable url browser version firefox operating system linux tested another browser yes opera problem type site is not usable description page not loading correctly steps to reproduce when i try to login into notion the site won t load same issue appears if i try to sign up the site works fine before i try to do either of those things view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
0
15,764
19,913,027,286
IssuesEvent
2022-01-25 19:13:19
input-output-hk/high-assurance-legacy
https://api.github.com/repos/input-output-hk/high-assurance-legacy
closed
Remove the `output_rest` interpretation of the `residual` locale
language: isabelle topic: process calculus type: improvement
Currently, the `residual` locale has an interpretation for output rests, although output rests aren’t morally residuals. The reason is that the manual definition of `proper_lift` and the manually conducted proofs of its properties referred to `output_rest_lift` and its properties. Meanwhile, we use Isabelle’s support for residuals to define `proper_lift` and show its properties. Therefore, there is no justification for having the `output_rest` interpretation of `residual` anymore, and thus we want to remove it.
1.0
Remove the `output_rest` interpretation of the `residual` locale - Currently, the `residual` locale has an interpretation for output rests, although output rests aren’t morally residuals. The reason is that the manual definition of `proper_lift` and the manually conducted proofs of its properties referred to `output_rest_lift` and its properties. Meanwhile, we use Isabelle’s support for residuals to define `proper_lift` and show its properties. Therefore, there is no justification for having the `output_rest` interpretation of `residual` anymore, and thus we want to remove it.
process
remove the output rest interpretation of the residual locale currently the residual locale has an interpretation for output rests although output rests aren’t morally residuals the reason is that the manual definition of proper lift and the manually conducted proofs of its properties referred to output rest lift and its properties meanwhile we use isabelle’s support for residuals to define proper lift and show its properties therefore there is no justification for having the output rest interpretation of residual anymore and thus we want to remove it
1
15,770
19,915,010,904
IssuesEvent
2022-01-25 21:28:47
medic/cht-core
https://api.github.com/repos/medic/cht-core
closed
Release 3.13.1
Type: Internal process
# Planning - Product Manager - [x] Create an GH Milestone and add this issue to it. - [x] Add all the issues to be worked on to the Milestone. # Development - Release Engineer When development is ready to begin one of the engineers should be nominated as a Release Engineer. They will be responsible for making sure the following tasks are completed though not necessarily completing them. - [x] Set the version number in `package.json` and `package-lock.json` and submit a PR to the release branch. The easiest way to do this is to use `npm --no-git-tag-version version patch`. - [ ] Write an update in the weekly Product Team call agenda summarising development and acceptance testing progress and identifying any blockers. The Release Engineer is to update this every week until the version is released. # Releasing - Release Engineer Once all issues have passed acceptance testing and have been merged into `master` and backported to the release branch release testing can begin. - [x] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing. - [ ] Create a new document in the [release-notes folder](https://github.com/medic/cht-core/tree/master/release-notes) in `master`. Ensure all issues are in the GH Milestone, that they're correct labelled, and have human readable descriptions. Use [this script](https://github.com/medic/cht-core/blob/master/scripts/release-notes/) to export the issues into our release note format. Manually document any known migration steps and known issues. - [ ] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta. - [ ] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/cht-core/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release. - [ ] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>` - [ ] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/), under the "Product - Releases" category using this template: ``` @channel *Announcing the release of {{version}}* This release fixes {{number of bugs}}. Read the release notes for full details: {{url}} ``` - [ ] Mark this issue "done" and close the Milestone.
1.0
Release 3.13.1 - # Planning - Product Manager - [x] Create an GH Milestone and add this issue to it. - [x] Add all the issues to be worked on to the Milestone. # Development - Release Engineer When development is ready to begin one of the engineers should be nominated as a Release Engineer. They will be responsible for making sure the following tasks are completed though not necessarily completing them. - [x] Set the version number in `package.json` and `package-lock.json` and submit a PR to the release branch. The easiest way to do this is to use `npm --no-git-tag-version version patch`. - [ ] Write an update in the weekly Product Team call agenda summarising development and acceptance testing progress and identifying any blockers. The Release Engineer is to update this every week until the version is released. # Releasing - Release Engineer Once all issues have passed acceptance testing and have been merged into `master` and backported to the release branch release testing can begin. - [x] Build a beta named `<major>.<minor>.<patch>-beta.1` by pushing a git tag and when CI completes successfully notify the QA team that it's ready for release testing. - [ ] Create a new document in the [release-notes folder](https://github.com/medic/cht-core/tree/master/release-notes) in `master`. Ensure all issues are in the GH Milestone, that they're correct labelled, and have human readable descriptions. Use [this script](https://github.com/medic/cht-core/blob/master/scripts/release-notes/) to export the issues into our release note format. Manually document any known migration steps and known issues. - [ ] Until release testing passes, make sure regressions are fixed in `master`, cherry-pick them into the release branch, and release another beta. - [ ] Create a release in GitHub from the release branch so it shows up under the [Releases tab](https://github.com/medic/cht-core/releases) with the naming convention `<major>.<minor>.<patch>`. This will create the git tag automatically. Link to the release notes in the description of the release. - [ ] Confirm the release build completes successfully and the new release is available on the [market](https://staging.dev.medicmobile.org/builds/releases). Make sure that the document has new entry with `id: medic:medic:<major>.<minor>.<patch>` - [ ] Announce the release on the [CHT forum](https://forum.communityhealthtoolkit.org/), under the "Product - Releases" category using this template: ``` @channel *Announcing the release of {{version}}* This release fixes {{number of bugs}}. Read the release notes for full details: {{url}} ``` - [ ] Mark this issue "done" and close the Milestone.
process
release planning product manager create an gh milestone and add this issue to it add all the issues to be worked on to the milestone development release engineer when development is ready to begin one of the engineers should be nominated as a release engineer they will be responsible for making sure the following tasks are completed though not necessarily completing them set the version number in package json and package lock json and submit a pr to the release branch the easiest way to do this is to use npm no git tag version version patch write an update in the weekly product team call agenda summarising development and acceptance testing progress and identifying any blockers the release engineer is to update this every week until the version is released releasing release engineer once all issues have passed acceptance testing and have been merged into master and backported to the release branch release testing can begin build a beta named beta by pushing a git tag and when ci completes successfully notify the qa team that it s ready for release testing create a new document in the in master ensure all issues are in the gh milestone that they re correct labelled and have human readable descriptions use to export the issues into our release note format manually document any known migration steps and known issues until release testing passes make sure regressions are fixed in master cherry pick them into the release branch and release another beta create a release in github from the release branch so it shows up under the with the naming convention this will create the git tag automatically link to the release notes in the description of the release confirm the release build completes successfully and the new release is available on the make sure that the document has new entry with id medic medic announce the release on the under the product releases category using this template channel announcing the release of version this release fixes number of bugs read the release notes for full details url mark this issue done and close the milestone
1
24,815
12,152,856,281
IssuesEvent
2020-04-24 23:38:11
Azure/azure-sdk-for-js
https://api.github.com/repos/Azure/azure-sdk-for-js
closed
Service Bus Send Reliability
Client Service Bus customer-reported
- **Package Name**: azure/service-bus - **Package Version**: 1.1.1 - **Operating system**: linux node:8.16-slim **Describe the bug** Sending messages to service bus will often encounter errors during platform service bus updates, deployments and other transient issues. 1. Sends taking too long, There appears to be a very long or no timeouts for send. Further, there is so SLA or any guidelines to how to handle sends when they are taking longer than usual. For reference here is an icm: https://icm.ad.msft.net/imp/v3/incidents/details/177472461/home . There needs to be a built in **send timeout support**, the sdk needs to **understand platform connectivity issues and be able to retry sends internally**. 2. We currently have a 10 second timeout ourselves. We have gotten into a situation where all (4) containers in a cluster (multi-cluster) will get send timeouts for every send until we restart the services. I am assuming this is due to the current retry logic (150 retries?) taking longer than our 10 seconds so we will timeout and never get the actual error that is causing the issue. The sdk needs **custom retry logic** and an **alternative way to receive these errors** if we are forced to timeout before we would get the error. 3. One of our containers got a "Too much pending tasks" after one of these send timeout issues. Only solution was to restart the container. Apparently this was due to more than 1000 sends going on. The sdk should have **better clean up of failed sends and be able to support more than 1000 sendings messages** (possibly needs more underlying active amqp links?).
1.0
Service Bus Send Reliability - - **Package Name**: azure/service-bus - **Package Version**: 1.1.1 - **Operating system**: linux node:8.16-slim **Describe the bug** Sending messages to service bus will often encounter errors during platform service bus updates, deployments and other transient issues. 1. Sends taking too long, There appears to be a very long or no timeouts for send. Further, there is so SLA or any guidelines to how to handle sends when they are taking longer than usual. For reference here is an icm: https://icm.ad.msft.net/imp/v3/incidents/details/177472461/home . There needs to be a built in **send timeout support**, the sdk needs to **understand platform connectivity issues and be able to retry sends internally**. 2. We currently have a 10 second timeout ourselves. We have gotten into a situation where all (4) containers in a cluster (multi-cluster) will get send timeouts for every send until we restart the services. I am assuming this is due to the current retry logic (150 retries?) taking longer than our 10 seconds so we will timeout and never get the actual error that is causing the issue. The sdk needs **custom retry logic** and an **alternative way to receive these errors** if we are forced to timeout before we would get the error. 3. One of our containers got a "Too much pending tasks" after one of these send timeout issues. Only solution was to restart the container. Apparently this was due to more than 1000 sends going on. The sdk should have **better clean up of failed sends and be able to support more than 1000 sendings messages** (possibly needs more underlying active amqp links?).
non_process
service bus send reliability package name azure service bus package version operating system linux node slim describe the bug sending messages to service bus will often encounter errors during platform service bus updates deployments and other transient issues sends taking too long there appears to be a very long or no timeouts for send further there is so sla or any guidelines to how to handle sends when they are taking longer than usual for reference here is an icm there needs to be a built in send timeout support the sdk needs to understand platform connectivity issues and be able to retry sends internally we currently have a second timeout ourselves we have gotten into a situation where all containers in a cluster multi cluster will get send timeouts for every send until we restart the services i am assuming this is due to the current retry logic retries taking longer than our seconds so we will timeout and never get the actual error that is causing the issue the sdk needs custom retry logic and an alternative way to receive these errors if we are forced to timeout before we would get the error one of our containers got a too much pending tasks after one of these send timeout issues only solution was to restart the container apparently this was due to more than sends going on the sdk should have better clean up of failed sends and be able to support more than sendings messages possibly needs more underlying active amqp links
0
22,614
31,841,352,959
IssuesEvent
2023-09-14 16:32:28
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
metaflow-netflixext 1.0.2 has 1 GuardDog issues
guarddog silent-process-execution
https://pypi.org/project/metaflow-netflixext https://inspector.pypi.io/project/metaflow-netflixext ```{ "dependency": "metaflow-netflixext", "version": "1.0.2", "result": { "issues": 1, "errors": {}, "results": { "silent-process-execution": [ { "location": "metaflow-netflixext-1.0.2/metaflow_extensions/netflix_ext/plugins/conda/conda.py:2526", "code": " p = subprocess.Popen(\n [\n self._bins[\"micromamba\"],\n \"-r\",\n os.path.dirname(self._package_dirs[0]),\n \"server\",\n \"-p\",\n... )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmp9zplm8rt/metaflow-netflixext" } }```
1.0
metaflow-netflixext 1.0.2 has 1 GuardDog issues - https://pypi.org/project/metaflow-netflixext https://inspector.pypi.io/project/metaflow-netflixext ```{ "dependency": "metaflow-netflixext", "version": "1.0.2", "result": { "issues": 1, "errors": {}, "results": { "silent-process-execution": [ { "location": "metaflow-netflixext-1.0.2/metaflow_extensions/netflix_ext/plugins/conda/conda.py:2526", "code": " p = subprocess.Popen(\n [\n self._bins[\"micromamba\"],\n \"-r\",\n os.path.dirname(self._package_dirs[0]),\n \"server\",\n \"-p\",\n... )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmp9zplm8rt/metaflow-netflixext" } }```
process
metaflow netflixext has guarddog issues dependency metaflow netflixext version result issues errors results silent process execution location metaflow netflixext metaflow extensions netflix ext plugins conda conda py code p subprocess popen n n r n os path dirname self package dirs n server n p n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp metaflow netflixext
1
1,645
4,269,584,495
IssuesEvent
2016-07-13 01:18:29
ParsePlatform/parse-server
https://api.github.com/repos/ParsePlatform/parse-server
closed
Group chat for repository (Gitter)
in-process
I noticed that a lot of tasks are created just to discuss or ask some question. It would be probably better to use group chat for that, something like Gitter or Slack. I think Gitter would be better, because it stores chat logs forever. I checked that you were asked for having Gitter chat already in #566 and #1366. Setting up channel in Gitter is pretty straightforward. Let me know if you need any help with it.
1.0
Group chat for repository (Gitter) - I noticed that a lot of tasks are created just to discuss or ask some question. It would be probably better to use group chat for that, something like Gitter or Slack. I think Gitter would be better, because it stores chat logs forever. I checked that you were asked for having Gitter chat already in #566 and #1366. Setting up channel in Gitter is pretty straightforward. Let me know if you need any help with it.
process
group chat for repository gitter i noticed that a lot of tasks are created just to discuss or ask some question it would be probably better to use group chat for that something like gitter or slack i think gitter would be better because it stores chat logs forever i checked that you were asked for having gitter chat already in and setting up channel in gitter is pretty straightforward let me know if you need any help with it
1
3,991
6,918,519,960
IssuesEvent
2017-11-29 12:31:19
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
Should emulate native browser behaviour related with trailing slashes for location properties
!IMPORTANT! AREA: client SYSTEM: URL processing TYPE: bug
Related with https://testcafe-discuss.devexpress.com/t/getting-infinite-angular-digest-cycles-only-with-testcafe/557/7 Code to reproduce: ```js Page location is 'http://example.com' Without proxing: location.href: 'https://example.com/' With proxing: location.href: 'https://example.com' ```
1.0
Should emulate native browser behaviour related with trailing slashes for location properties - Related with https://testcafe-discuss.devexpress.com/t/getting-infinite-angular-digest-cycles-only-with-testcafe/557/7 Code to reproduce: ```js Page location is 'http://example.com' Without proxing: location.href: 'https://example.com/' With proxing: location.href: 'https://example.com' ```
process
should emulate native browser behaviour related with trailing slashes for location properties related with code to reproduce js page location is without proxing location href with proxing location href
1
20,780
27,516,889,060
IssuesEvent
2023-03-06 12:35:59
alphagov/govuk-design-system
https://api.github.com/repos/alphagov/govuk-design-system
closed
Build a publishing plan for Exit this Page
process
## What Scope remaining stories for mvp and build a publishing plan. ## Why So the team and stakeholders know what's left to do and we can give ourselves healthy targets to work towards. ## Who needs to work on Kelly ## Who needs to review this Steve, Katrina, Ciandelle, David, Owen, Beeps, Calvin ## Done when - [ ] Run a workshop to explore remaining actions - [ ] Estimate timescales - [ ] Document decisions
1.0
Build a publishing plan for Exit this Page - ## What Scope remaining stories for mvp and build a publishing plan. ## Why So the team and stakeholders know what's left to do and we can give ourselves healthy targets to work towards. ## Who needs to work on Kelly ## Who needs to review this Steve, Katrina, Ciandelle, David, Owen, Beeps, Calvin ## Done when - [ ] Run a workshop to explore remaining actions - [ ] Estimate timescales - [ ] Document decisions
process
build a publishing plan for exit this page what scope remaining stories for mvp and build a publishing plan why so the team and stakeholders know what s left to do and we can give ourselves healthy targets to work towards who needs to work on kelly who needs to review this steve katrina ciandelle david owen beeps calvin done when run a workshop to explore remaining actions estimate timescales document decisions
1
131,496
18,247,955,725
IssuesEvent
2021-10-01 21:22:36
turkdevops/grafana
https://api.github.com/repos/turkdevops/grafana
closed
CVE-2018-14042 (Medium) detected in bootstrap-3.3.7.js - autoclosed
security vulnerability
## CVE-2018-14042 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.js</a></p> <p>Path to dependency file: grafana/node_modules/fmin/examples/nelder_mead.html</p> <p>Path to vulnerable library: grafana/node_modules/fmin/examples/nelder_mead.html</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.3.7.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/grafana/commit/494d3ade5b02fb069ecdd7a9a278fd2016f5f577">494d3ade5b02fb069ecdd7a9a278fd2016f5f577</a></p> <p>Found in base branch: <b>datasource-meta</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip. <p>Publish Date: 2018-07-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042>CVE-2018-14042</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p> <p>Release Date: 2018-07-13</p> <p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2.org.webjars:bootstrap:3.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-14042 (Medium) detected in bootstrap-3.3.7.js - autoclosed - ## CVE-2018-14042 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.js</a></p> <p>Path to dependency file: grafana/node_modules/fmin/examples/nelder_mead.html</p> <p>Path to vulnerable library: grafana/node_modules/fmin/examples/nelder_mead.html</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.3.7.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/grafana/commit/494d3ade5b02fb069ecdd7a9a278fd2016f5f577">494d3ade5b02fb069ecdd7a9a278fd2016f5f577</a></p> <p>Found in base branch: <b>datasource-meta</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap before 4.1.2, XSS is possible in the data-container property of tooltip. <p>Publish Date: 2018-07-13 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14042>CVE-2018-14042</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/twbs/bootstrap/pull/26630">https://github.com/twbs/bootstrap/pull/26630</a></p> <p>Release Date: 2018-07-13</p> <p>Fix Resolution: org.webjars.npm:bootstrap:4.1.2.org.webjars:bootstrap:3.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in bootstrap js autoclosed cve medium severity vulnerability vulnerable library bootstrap js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file grafana node modules fmin examples nelder mead html path to vulnerable library grafana node modules fmin examples nelder mead html dependency hierarchy x bootstrap js vulnerable library found in head commit a href found in base branch datasource meta vulnerability details in bootstrap before xss is possible in the data container property of tooltip publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org webjars npm bootstrap org webjars bootstrap step up your open source security game with whitesource
0
1,069
3,536,080,960
IssuesEvent
2016-01-17 00:34:43
MaretEngineering/MROV
https://api.github.com/repos/MaretEngineering/MROV
closed
Add a vertical thrust label
enhancement Processing
Somewhere near the right trigger and left trigger but make it look nice. It should have the actual thrust value being sent.
1.0
Add a vertical thrust label - Somewhere near the right trigger and left trigger but make it look nice. It should have the actual thrust value being sent.
process
add a vertical thrust label somewhere near the right trigger and left trigger but make it look nice it should have the actual thrust value being sent
1
14,802
18,102,902,911
IssuesEvent
2021-09-22 15:54:37
yandali-damian/LIM015-social-network
https://api.github.com/repos/yandali-damian/LIM015-social-network
opened
Correcciones del segundo sprint
pending Process
>-[ ] Modificar imagen de logo en login y signup >-[ ] Validar email y password en login >-[ ] Implementar funcionalidad del botón de google en signup. >-[ ] Capturar error de correo ya registrado. >-[ ] Validar el confirma contraseña para que no redireccione sin ella.
1.0
Correcciones del segundo sprint - >-[ ] Modificar imagen de logo en login y signup >-[ ] Validar email y password en login >-[ ] Implementar funcionalidad del botón de google en signup. >-[ ] Capturar error de correo ya registrado. >-[ ] Validar el confirma contraseña para que no redireccione sin ella.
process
correcciones del segundo sprint modificar imagen de logo en login y signup validar email y password en login implementar funcionalidad del botón de google en signup capturar error de correo ya registrado validar el confirma contraseña para que no redireccione sin ella
1
1,839
2,671,760,210
IssuesEvent
2015-03-24 09:39:56
McStasMcXtrace/McCode
https://api.github.com/repos/McStasMcXtrace/McCode
closed
missing argument of an outf
bug C: McCode kernel P: minor
**Reported by erkn on 11 Oct 2012 21:05 UTC** there is an ID_PRE to few at line 1059 in src/cogen.c.in
1.0
missing argument of an outf - **Reported by erkn on 11 Oct 2012 21:05 UTC** there is an ID_PRE to few at line 1059 in src/cogen.c.in
non_process
missing argument of an outf reported by erkn on oct utc there is an id pre to few at line in src cogen c in
0
23,247
10,863,197,541
IssuesEvent
2019-11-14 14:43:09
benchmarkdebricked/groovy
https://api.github.com/repos/benchmarkdebricked/groovy
opened
CVE-2017-5645 (High) detected in log4j-core-2.8.jar
security vulnerability
## CVE-2017-5645 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.8.jar</b></p></summary> <p>The Apache Log4j Implementation</p> <p>Library home page: <a href="https://logging.apache.org/log4j/2.x/log4j-core/">https://logging.apache.org/log4j/2.x/log4j-core/</a></p> <p>Path to vulnerable library: /groovy/build.gradle,/groovy/build.gradle</p> <p> Dependency Hierarchy: - :x: **log4j-core-2.8.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/benchmarkdebricked/groovy/commit/bb7b5cf7f2cc9073d01ec39d0517e42ddb9890c5">bb7b5cf7f2cc9073d01ec39d0517e42ddb9890c5</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Apache Log4j 2.x before 2.8.2, when using the TCP socket server or UDP socket server to receive serialized log events from another application, a specially crafted binary payload can be sent that, when deserialized, can execute arbitrary code. <p>Publish Date: 2017-04-17 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5645>CVE-2017-5645</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5645">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5645</a></p> <p>Release Date: 2017-04-17</p> <p>Fix Resolution: 2.8.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-5645 (High) detected in log4j-core-2.8.jar - ## CVE-2017-5645 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.8.jar</b></p></summary> <p>The Apache Log4j Implementation</p> <p>Library home page: <a href="https://logging.apache.org/log4j/2.x/log4j-core/">https://logging.apache.org/log4j/2.x/log4j-core/</a></p> <p>Path to vulnerable library: /groovy/build.gradle,/groovy/build.gradle</p> <p> Dependency Hierarchy: - :x: **log4j-core-2.8.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/benchmarkdebricked/groovy/commit/bb7b5cf7f2cc9073d01ec39d0517e42ddb9890c5">bb7b5cf7f2cc9073d01ec39d0517e42ddb9890c5</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Apache Log4j 2.x before 2.8.2, when using the TCP socket server or UDP socket server to receive serialized log events from another application, a specially crafted binary payload can be sent that, when deserialized, can execute arbitrary code. <p>Publish Date: 2017-04-17 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5645>CVE-2017-5645</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5645">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5645</a></p> <p>Release Date: 2017-04-17</p> <p>Fix Resolution: 2.8.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in core jar cve high severity vulnerability vulnerable library core jar the apache implementation library home page a href path to vulnerable library groovy build gradle groovy build gradle dependency hierarchy x core jar vulnerable library found in head commit a href vulnerability details in apache x before when using the tcp socket server or udp socket server to receive serialized log events from another application a specially crafted binary payload can be sent that when deserialized can execute arbitrary code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
14,243
17,172,627,600
IssuesEvent
2021-07-15 07:26:22
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
New standalone console tool for running processing algorithms (Request in QGIS)
3.14 Processing Tools
### Request for documentation From pull request QGIS/qgis#34617 Author: @nyalldawson QGIS version: 3.14 (Feature) **New standalone console tool for running processing algorithms** ### PR Description: **UPDATE**: the final tool is called `qgis_process`, all examples here have been updated to reflect the change This new ~~qgis_transform~~ qgis_process tool allows users to run processing algorithms (both built-in, and those provided by plugins) directly from the console. Running: - `qgis_process list` will output a complete list of all available algorithms, grouped by provider. - `qgis_process plugins` lists available and activated plugins which advertise the hasProcessingProvider metadata option (only these plugins are loaded by the tool) - `qgis_process help algid` outputs the help and input descriptions for the specified algorithm, e.g. `qgis_process help native:centroids` `qgis_process run`: runs an algorithm. Parameters are specified by a `--param=value` syntax. E.g. qgis_process run native:centroids --INPUT="my_shapefile.shp" --OUTPUT="centroids.kml" or qgis_process run native:buffer --INPUT=/home/me/my.shp --DISTANCE=20 --OUTPUT=/home/me/buffered.shp While running an algorithm a text-based feedback bar is shown, and the operation can be cancelled via CTRL+C Sponsored by the Swedish User Group More detail is available in https://github.com/qgis/QGIS-Enhancement-Proposals/issues/140 Still to come (follow up PRs): - running a model directly from a model file - running a script directly from an algorithm script file - using the .env variable file approach when running under windows ### Commits tagged with [need-docs] or [FEATURE] "[FEATURE][processing] New standalone console tool for running processing algorithms\n\nThis new qgis_transform tool allows users to run processing algorithms\n(both built-in, and those provided by plugins) directly from the console.\n\nRunning:\n\n- \"qgis_transform list\" will output a complete list of all available\nalgorithms, grouped by provider.\n- \"qgis_transform plugins\" lists available and activated plugins which\nadvertise the hasProcessingProvider metadata option (only these plugins\nare loaded by the tool)\n- \"qgis_transform help algid\" outputs the help and input descriptions\nfor the specified algorithm, e.g. \"qgis_transform help native:centroids\"\n\n\"qgis_transform run\": runs an algorithm. Parameters are specified by a\n\"--param=value\" syntax. E.g.\n\n qgis_transform run native:centroids --INPUT=\"my_shapefile.shp\" --OUTPUT=\"centroids.kml\"\n\nor\n\n qgis_transform run native:buffer --INPUT=/home/me/my.shp --DISTANCE=20 --OUTPUT=/home/me/buffered.shp\n\nWhile running an algorithm a text-based feedback bar is shown, and the\noperation can be cancelled via CTRL+C\n\nSponsored by the Swedish User Group" "[FEATURE] Allow running model files direct from standalone qgis_process tool"
1.0
New standalone console tool for running processing algorithms (Request in QGIS) - ### Request for documentation From pull request QGIS/qgis#34617 Author: @nyalldawson QGIS version: 3.14 (Feature) **New standalone console tool for running processing algorithms** ### PR Description: **UPDATE**: the final tool is called `qgis_process`, all examples here have been updated to reflect the change This new ~~qgis_transform~~ qgis_process tool allows users to run processing algorithms (both built-in, and those provided by plugins) directly from the console. Running: - `qgis_process list` will output a complete list of all available algorithms, grouped by provider. - `qgis_process plugins` lists available and activated plugins which advertise the hasProcessingProvider metadata option (only these plugins are loaded by the tool) - `qgis_process help algid` outputs the help and input descriptions for the specified algorithm, e.g. `qgis_process help native:centroids` `qgis_process run`: runs an algorithm. Parameters are specified by a `--param=value` syntax. E.g. qgis_process run native:centroids --INPUT="my_shapefile.shp" --OUTPUT="centroids.kml" or qgis_process run native:buffer --INPUT=/home/me/my.shp --DISTANCE=20 --OUTPUT=/home/me/buffered.shp While running an algorithm a text-based feedback bar is shown, and the operation can be cancelled via CTRL+C Sponsored by the Swedish User Group More detail is available in https://github.com/qgis/QGIS-Enhancement-Proposals/issues/140 Still to come (follow up PRs): - running a model directly from a model file - running a script directly from an algorithm script file - using the .env variable file approach when running under windows ### Commits tagged with [need-docs] or [FEATURE] "[FEATURE][processing] New standalone console tool for running processing algorithms\n\nThis new qgis_transform tool allows users to run processing algorithms\n(both built-in, and those provided by plugins) directly from the console.\n\nRunning:\n\n- \"qgis_transform list\" will output a complete list of all available\nalgorithms, grouped by provider.\n- \"qgis_transform plugins\" lists available and activated plugins which\nadvertise the hasProcessingProvider metadata option (only these plugins\nare loaded by the tool)\n- \"qgis_transform help algid\" outputs the help and input descriptions\nfor the specified algorithm, e.g. \"qgis_transform help native:centroids\"\n\n\"qgis_transform run\": runs an algorithm. Parameters are specified by a\n\"--param=value\" syntax. E.g.\n\n qgis_transform run native:centroids --INPUT=\"my_shapefile.shp\" --OUTPUT=\"centroids.kml\"\n\nor\n\n qgis_transform run native:buffer --INPUT=/home/me/my.shp --DISTANCE=20 --OUTPUT=/home/me/buffered.shp\n\nWhile running an algorithm a text-based feedback bar is shown, and the\noperation can be cancelled via CTRL+C\n\nSponsored by the Swedish User Group" "[FEATURE] Allow running model files direct from standalone qgis_process tool"
process
new standalone console tool for running processing algorithms request in qgis request for documentation from pull request qgis qgis author nyalldawson qgis version feature new standalone console tool for running processing algorithms pr description update the final tool is called qgis process all examples here have been updated to reflect the change this new qgis transform qgis process tool allows users to run processing algorithms both built in and those provided by plugins directly from the console running qgis process list will output a complete list of all available algorithms grouped by provider qgis process plugins lists available and activated plugins which advertise the hasprocessingprovider metadata option only these plugins are loaded by the tool qgis process help algid outputs the help and input descriptions for the specified algorithm e g qgis process help native centroids qgis process run runs an algorithm parameters are specified by a param value syntax e g qgis process run native centroids input my shapefile shp output centroids kml or qgis process run native buffer input home me my shp distance output home me buffered shp while running an algorithm a text based feedback bar is shown and the operation can be cancelled via ctrl c sponsored by the swedish user group more detail is available in still to come follow up prs running a model directly from a model file running a script directly from an algorithm script file using the env variable file approach when running under windows commits tagged with or new standalone console tool for running processing algorithms n nthis new qgis transform tool allows users to run processing algorithms n both built in and those provided by plugins directly from the console n nrunning n n qgis transform list will output a complete list of all available nalgorithms grouped by provider n qgis transform plugins lists available and activated plugins which nadvertise the hasprocessingprovider metadata option only these plugins nare loaded by the tool n qgis transform help algid outputs the help and input descriptions nfor the specified algorithm e g qgis transform help native centroids n n qgis transform run runs an algorithm parameters are specified by a n param value syntax e g n n qgis transform run native centroids input my shapefile shp output centroids kml n nor n n qgis transform run native buffer input home me my shp distance output home me buffered shp n nwhile running an algorithm a text based feedback bar is shown and the noperation can be cancelled via ctrl c n nsponsored by the swedish user group allow running model files direct from standalone qgis process tool
1
25,451
25,206,981,785
IssuesEvent
2022-11-13 20:02:44
bevyengine/bevy
https://api.github.com/repos/bevyengine/bevy
opened
Add an additional constructor method to `ManualEventReader`, which starts at the current frame's events0
A-ECS C-Usability
## What problem does this solve or what need does it fill? #5730 introduced a generally helpful error message, > 2022-11-13T19:35:54.363800Z WARN bevy_ecs::event: Missed 345 `bevy_input::mouse::MouseMotion` events. Consider reading from the `EventReader` more often (generally the best solution) or calling Events::update() less frequently (normally this is called once per frame). This problem is most likely due to run criteria/fixed timesteps or consuming events conditionally. See the Events documentation for more information. However, when working with `ManualEventReader` to read recent events via [`Events:;get_reader`](https://docs.rs/bevy/latest/bevy/ecs/event/struct.Events.html#method.get_reader), this error is triggered constantly. ## What solution would you like? The correct solution here, is to instead initialize the manual event reader at the start of the events that occured this frame. This avoids double-reading, and silences the warning. In order to do this however, `bevy_ecs` needs to expose a constructor ala `get_current` that sets the internal `events_seen` field correctly. ## What alternative(s) have you considered? We could instead (or additionally) allow public construction of `EventReader` from `Events`, but this feels more likely to be confused and misused. Adding public access to `ManualEventReader` of `EventReader` may work to solve my particular issue as well, albeit in a bit messier of a way.
True
Add an additional constructor method to `ManualEventReader`, which starts at the current frame's events0 - ## What problem does this solve or what need does it fill? #5730 introduced a generally helpful error message, > 2022-11-13T19:35:54.363800Z WARN bevy_ecs::event: Missed 345 `bevy_input::mouse::MouseMotion` events. Consider reading from the `EventReader` more often (generally the best solution) or calling Events::update() less frequently (normally this is called once per frame). This problem is most likely due to run criteria/fixed timesteps or consuming events conditionally. See the Events documentation for more information. However, when working with `ManualEventReader` to read recent events via [`Events:;get_reader`](https://docs.rs/bevy/latest/bevy/ecs/event/struct.Events.html#method.get_reader), this error is triggered constantly. ## What solution would you like? The correct solution here, is to instead initialize the manual event reader at the start of the events that occured this frame. This avoids double-reading, and silences the warning. In order to do this however, `bevy_ecs` needs to expose a constructor ala `get_current` that sets the internal `events_seen` field correctly. ## What alternative(s) have you considered? We could instead (or additionally) allow public construction of `EventReader` from `Events`, but this feels more likely to be confused and misused. Adding public access to `ManualEventReader` of `EventReader` may work to solve my particular issue as well, albeit in a bit messier of a way.
non_process
add an additional constructor method to manualeventreader which starts at the current frame s what problem does this solve or what need does it fill introduced a generally helpful error message warn bevy ecs event missed bevy input mouse mousemotion events consider reading from the eventreader more often generally the best solution or calling events update less frequently normally this is called once per frame this problem is most likely due to run criteria fixed timesteps or consuming events conditionally see the events documentation for more information however when working with manualeventreader to read recent events via this error is triggered constantly what solution would you like the correct solution here is to instead initialize the manual event reader at the start of the events that occured this frame this avoids double reading and silences the warning in order to do this however bevy ecs needs to expose a constructor ala get current that sets the internal events seen field correctly what alternative s have you considered we could instead or additionally allow public construction of eventreader from events but this feels more likely to be confused and misused adding public access to manualeventreader of eventreader may work to solve my particular issue as well albeit in a bit messier of a way
0
19,406
13,226,727,554
IssuesEvent
2020-08-18 00:50:52
microsoft/react-native-windows
https://api.github.com/repos/microsoft/react-native-windows
closed
E2ETest: VisitAllPages instability
Area: Test Infrastructure bug
We're seeing an instability in the new VisitAllPages test. Log snippet from one of the failures: 2020-08-07T08:09:13.1994088Z [0-7] loading page PlatformColor 2020-08-07T08:09:13.1995129Z 2020-08-07T08:09:13.193Z INFO webdriver: COMMAND findElement("accessibility id", "PlatformColor") 2020-08-07T08:09:13.1996630Z 2020-08-07T08:09:13.198Z INFO webdriver: [POST] http://127.0.0.1:4723/wd/hub/session/2151e415-2444-4c9d-a520-7afe5f1b3a64/element 2020-08-07T08:09:13.1997759Z [0-7] 2020-08-07T08:09:13.198Z INFO webdriver: DATA { using: 'accessibility id', value: 'PlatformColor' } 2020-08-07T08:09:13.4829383Z [0-7] 2020-08-07T08:09:13.481Z INFO webdriver: RESULT { 2020-08-07T08:09:13.4833620Z message: 'An element could not be located on the page using the given search parameters.' 2020-08-07T08:09:13.4842935Z } 2020-08-07T08:09:13.4979620Z [0-7] 2020-08-07T08:09:13.494Z DEBUG webdriverio: command click was called on an element ("~PlatformColor") that wasn't found, waiting for it... 2020-08-07T08:09:13.4983681Z [0-7] 2020-08-07T08:09:13.495Z INFO webdriver: COMMAND findElements("accessibility id", "PlatformColor") 2020-08-07T08:09:13.4985958Z [0-7] 2020-08-07T08:09:13.495Z INFO webdriver: [POST] http://127.0.0.1:4723/wd/hub/session/2151e415-2444-4c9d-a520-7afe5f1b3a64/elements 2020-08-07T08:09:13.4988522Z 2020-08-07T08:09:13.495Z INFO webdriver: DATA { using: 'accessibility id', value: 'PlatformColor' } 2020-08-07T08:09:14.5213136Z [0-7] 2020-08-07T08:09:14.513Z INFO webdriver: RESULT [] 2020-08-07T08:09:14.5219624Z [0-7] 2020-08-07T08:09:14.520Z INFO webdriver: COMMAND findElements("accessibility id", "PlatformColor") 2020-08-07T08:09:14.5224482Z [0-7] 2020-08-07T08:09:14.520Z INFO webdriver: [POST] http://127.0.0.1:4723/wd/hub/session/2151e415-2444-4c9d-a520-7afe5f1b3a64/elements 2020-08-07T08:09:14.5226429Z [0-7] 2020-08-07T08:09:14.520Z INFO webdriver: DATA { using: 'accessibility id', value: 'PlatformColor' } 2020-08-07T08:09:23.5036329Z [0-7] 2020-08-07T08:09:23.501Z INFO webdriver: COMMAND takeScreenshot() 2020-08-07T08:09:23.5050971Z [0-7] 2020-08-07T08:09:23.502Z INFO webdriver: [GET] http://127.0.0.1:4723/wd/hub/session/2151e415-2444-4c9d-a520-7afe5f1b3a64/screenshot 2020-08-07T08:09:30.6419291Z [0-7] 2020-08-07T08:09:30.628Z INFO webdriver: RESULT [ 2020-08-07T08:09:30.6421251Z { 2020-08-07T08:09:30.6422253Z 'element-6066-11e4-a52e-4f735466cecf': '42.524832.4.9518', 2020-08-07T08:09:30.6423101Z ELEMENT: '42.524832.4.9518' 2020-08-07T08:09:30.6425000Z } 2020-08-07T08:09:30.6425643Z ] 2020-08-07T08:09:30.7543089Z [0-7] 2020-08-07T08:09:30.752Z INFO webdriver: RESULT iVBORw0KGgoAAAANSUhEUgAAAyIAAAJ6CAIAAABbhWEnAAAAAXNSR0IArs4c6... 2020-08-07T08:09:30.7570362Z [0-7] Error in "undefined" 2020-08-07T08:09:30.7581522Z Error: Can't call click on element with selector "~PlatformColor" because element wasn't found
1.0
E2ETest: VisitAllPages instability - We're seeing an instability in the new VisitAllPages test. Log snippet from one of the failures: 2020-08-07T08:09:13.1994088Z [0-7] loading page PlatformColor 2020-08-07T08:09:13.1995129Z 2020-08-07T08:09:13.193Z INFO webdriver: COMMAND findElement("accessibility id", "PlatformColor") 2020-08-07T08:09:13.1996630Z 2020-08-07T08:09:13.198Z INFO webdriver: [POST] http://127.0.0.1:4723/wd/hub/session/2151e415-2444-4c9d-a520-7afe5f1b3a64/element 2020-08-07T08:09:13.1997759Z [0-7] 2020-08-07T08:09:13.198Z INFO webdriver: DATA { using: 'accessibility id', value: 'PlatformColor' } 2020-08-07T08:09:13.4829383Z [0-7] 2020-08-07T08:09:13.481Z INFO webdriver: RESULT { 2020-08-07T08:09:13.4833620Z message: 'An element could not be located on the page using the given search parameters.' 2020-08-07T08:09:13.4842935Z } 2020-08-07T08:09:13.4979620Z [0-7] 2020-08-07T08:09:13.494Z DEBUG webdriverio: command click was called on an element ("~PlatformColor") that wasn't found, waiting for it... 2020-08-07T08:09:13.4983681Z [0-7] 2020-08-07T08:09:13.495Z INFO webdriver: COMMAND findElements("accessibility id", "PlatformColor") 2020-08-07T08:09:13.4985958Z [0-7] 2020-08-07T08:09:13.495Z INFO webdriver: [POST] http://127.0.0.1:4723/wd/hub/session/2151e415-2444-4c9d-a520-7afe5f1b3a64/elements 2020-08-07T08:09:13.4988522Z 2020-08-07T08:09:13.495Z INFO webdriver: DATA { using: 'accessibility id', value: 'PlatformColor' } 2020-08-07T08:09:14.5213136Z [0-7] 2020-08-07T08:09:14.513Z INFO webdriver: RESULT [] 2020-08-07T08:09:14.5219624Z [0-7] 2020-08-07T08:09:14.520Z INFO webdriver: COMMAND findElements("accessibility id", "PlatformColor") 2020-08-07T08:09:14.5224482Z [0-7] 2020-08-07T08:09:14.520Z INFO webdriver: [POST] http://127.0.0.1:4723/wd/hub/session/2151e415-2444-4c9d-a520-7afe5f1b3a64/elements 2020-08-07T08:09:14.5226429Z [0-7] 2020-08-07T08:09:14.520Z INFO webdriver: DATA { using: 'accessibility id', value: 'PlatformColor' } 2020-08-07T08:09:23.5036329Z [0-7] 2020-08-07T08:09:23.501Z INFO webdriver: COMMAND takeScreenshot() 2020-08-07T08:09:23.5050971Z [0-7] 2020-08-07T08:09:23.502Z INFO webdriver: [GET] http://127.0.0.1:4723/wd/hub/session/2151e415-2444-4c9d-a520-7afe5f1b3a64/screenshot 2020-08-07T08:09:30.6419291Z [0-7] 2020-08-07T08:09:30.628Z INFO webdriver: RESULT [ 2020-08-07T08:09:30.6421251Z { 2020-08-07T08:09:30.6422253Z 'element-6066-11e4-a52e-4f735466cecf': '42.524832.4.9518', 2020-08-07T08:09:30.6423101Z ELEMENT: '42.524832.4.9518' 2020-08-07T08:09:30.6425000Z } 2020-08-07T08:09:30.6425643Z ] 2020-08-07T08:09:30.7543089Z [0-7] 2020-08-07T08:09:30.752Z INFO webdriver: RESULT iVBORw0KGgoAAAANSUhEUgAAAyIAAAJ6CAIAAABbhWEnAAAAAXNSR0IArs4c6... 2020-08-07T08:09:30.7570362Z [0-7] Error in "undefined" 2020-08-07T08:09:30.7581522Z Error: Can't call click on element with selector "~PlatformColor" because element wasn't found
non_process
visitallpages instability we re seeing an instability in the new visitallpages test log snippet from one of the failures loading page platformcolor info webdriver command findelement accessibility id platformcolor info webdriver info webdriver data using accessibility id value platformcolor info webdriver result message an element could not be located on the page using the given search parameters debug webdriverio command click was called on an element platformcolor that wasn t found waiting for it info webdriver command findelements accessibility id platformcolor info webdriver info webdriver data using accessibility id value platformcolor info webdriver result info webdriver command findelements accessibility id platformcolor info webdriver info webdriver data using accessibility id value platformcolor info webdriver command takescreenshot info webdriver info webdriver result element element info webdriver result error in undefined error can t call click on element with selector platformcolor because element wasn t found
0
290,159
32,037,271,526
IssuesEvent
2023-09-22 16:16:39
DimaMend/DemoCorp
https://api.github.com/repos/DimaMend/DemoCorp
opened
underscore-1.8.3.tgz: 1 vulnerabilities (highest severity is: 7.2)
Mend: dependency security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.8.3.tgz</b></p></summary> <p>JavaScript's functional programming helper library.</p> <p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.8.3.tgz">https://registry.npmjs.org/underscore/-/underscore-1.8.3.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/underscore/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/DimaMend/DemoCorp/commit/d6fe4e785c3f8a7550bf5a053ad7d7ac413979ae">d6fe4e785c3f8a7550bf5a053ad7d7ac413979ae</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (underscore version) | Remediation Possible** | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2021-23358](https://www.mend.io/vulnerability-database/CVE-2021-23358) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.2 | underscore-1.8.3.tgz | Direct | 1.12.1 | &#9989; | <p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2021-23358</summary> ### Vulnerable Library - <b>underscore-1.8.3.tgz</b></p> <p>JavaScript's functional programming helper library.</p> <p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.8.3.tgz">https://registry.npmjs.org/underscore/-/underscore-1.8.3.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/underscore/package.json</p> <p> Dependency Hierarchy: - :x: **underscore-1.8.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/DimaMend/DemoCorp/commit/d6fe4e785c3f8a7550bf5a053ad7d7ac413979ae">d6fe4e785c3f8a7550bf5a053ad7d7ac413979ae</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument as it is not sanitized. <p>Publish Date: 2021-03-29 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23358>CVE-2021-23358</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.2</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p> <p>Release Date: 2021-03-29</p> <p>Fix Resolution: 1.12.1</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation will be attempted for this issue. </details> *** <p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p>
True
underscore-1.8.3.tgz: 1 vulnerabilities (highest severity is: 7.2) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>underscore-1.8.3.tgz</b></p></summary> <p>JavaScript's functional programming helper library.</p> <p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.8.3.tgz">https://registry.npmjs.org/underscore/-/underscore-1.8.3.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/underscore/package.json</p> <p> <p>Found in HEAD commit: <a href="https://github.com/DimaMend/DemoCorp/commit/d6fe4e785c3f8a7550bf5a053ad7d7ac413979ae">d6fe4e785c3f8a7550bf5a053ad7d7ac413979ae</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (underscore version) | Remediation Possible** | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2021-23358](https://www.mend.io/vulnerability-database/CVE-2021-23358) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.2 | underscore-1.8.3.tgz | Direct | 1.12.1 | &#9989; | <p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2021-23358</summary> ### Vulnerable Library - <b>underscore-1.8.3.tgz</b></p> <p>JavaScript's functional programming helper library.</p> <p>Library home page: <a href="https://registry.npmjs.org/underscore/-/underscore-1.8.3.tgz">https://registry.npmjs.org/underscore/-/underscore-1.8.3.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/underscore/package.json</p> <p> Dependency Hierarchy: - :x: **underscore-1.8.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/DimaMend/DemoCorp/commit/d6fe4e785c3f8a7550bf5a053ad7d7ac413979ae">d6fe4e785c3f8a7550bf5a053ad7d7ac413979ae</a></p> <p>Found in base branch: <b>master</b></p> </p> <p></p> ### Vulnerability Details <p> The package underscore from 1.13.0-0 and before 1.13.0-2, from 1.3.2 and before 1.12.1 are vulnerable to Arbitrary Code Injection via the template function, particularly when a variable property is passed as an argument as it is not sanitized. <p>Publish Date: 2021-03-29 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23358>CVE-2021-23358</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.2</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23358</a></p> <p>Release Date: 2021-03-29</p> <p>Fix Resolution: 1.12.1</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation will be attempted for this issue. </details> *** <p>:rescue_worker_helmet:Automatic Remediation will be attempted for this issue.</p>
non_process
underscore tgz vulnerabilities highest severity is vulnerable library underscore tgz javascript s functional programming helper library library home page a href path to dependency file package json path to vulnerable library node modules underscore package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in underscore version remediation possible high underscore tgz direct in some cases remediation pr cannot be created automatically for a vulnerability despite the availability of remediation details cve vulnerable library underscore tgz javascript s functional programming helper library library home page a href path to dependency file package json path to vulnerable library node modules underscore package json dependency hierarchy x underscore tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package underscore from and before from and before are vulnerable to arbitrary code injection via the template function particularly when a variable property is passed as an argument as it is not sanitized publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation will be attempted for this issue rescue worker helmet automatic remediation will be attempted for this issue
0
13,068
15,397,125,055
IssuesEvent
2021-03-03 21:39:47
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Processing algorithms silently fail to load when psycopg2 is missing
Bug Processing
So [this thread](https://lists.osgeo.org/pipermail/qgis-developer/2021-March/063199.html) on the mailing list raised what I think is the same issue I encountered some time ago when setting up a new computer. All algorithms except native QGIS ones were silently failing to load. I was able to do the following to understand what was going on: ``` Python Console Use iface to access QGIS API interface or Type help(iface) for more info Security warning: typing commands from an untrusted source can harm your computer import os import traceback from qgis.PyQt.QtCore import Qt, QCoreApplication from qgis.PyQt.QtWidgets import QApplication from qgis.PyQt.QtGui import QCursor from qgis.utils import iface from qgis.core import (QgsMessageLog, QgsApplication, QgsMapLayer, QgsProcessingProvider, QgsProcessingAlgorithm, QgsProcessingException, QgsProcessingParameterDefinition, QgsProcessingOutputVectorLayer, QgsProcessingOutputRasterLayer, QgsProcessingOutputMapLayer, QgsProcessingOutputMultipleLayers, QgsProcessingFeedback, QgsRuntimeProfiler) import processing from processing.core.ProcessingConfig import ProcessingConfig from processing.gui.MessageBarProgress import MessageBarProgress from processing.gui.RenderingStyles import RenderingStyles from processing.gui.Postprocessing import handleAlgorithmResults from processing.gui.AlgorithmExecutor import execute from processing.script import ScriptUtils from processing.tools import dataobjects with QgsRuntimeProfiler.profile('Import GDAL Provider'): from processing.algs.gdal.GdalAlgorithmProvider import GdalAlgorithmProvider # NOQA QgsApplication.processingRegistry().providers() [<qgis._core.QgsProcessingProvider object at 0x7fcce901d700>, <qgis._core.QgsProcessingProvider object at 0x7fcce901d8b0>, <qgis2web.qgis2webProvider.qgis2webProvider object at 0x7fcceb368c10>, <QuickOSM.quick_osm_processing.provider.Provider object at 0x7fcceb400940>] from processing.algs.gdal.GdalAlgorithmProvider import GdalAlgorithmProvider Traceback (most recent call last): File "/usr/lib64/python3.8/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "/home/timlinux/dev/cpp/QGIS-Debug-Build/output/python/qgis/utils.py", line 792, in _import mod = _builtin_import(name, globals, locals, fromlist, level) File "/home/timlinux/dev/cpp/QGIS-Debug-Build/output/python/plugins/processing/algs/gdal/GdalAlgorithmProvider.py", line 33, in <module> from .GdalUtils import GdalUtils File "/home/timlinux/dev/cpp/QGIS-Debug-Build/output/python/qgis/utils.py", line 792, in _import mod = _builtin_import(name, globals, locals, fromlist, level) File "/home/timlinux/dev/cpp/QGIS-Debug-Build/output/python/plugins/processing/algs/gdal/GdalUtils.py", line 30, in <module> import psycopg2 File "/home/timlinux/dev/cpp/QGIS-Debug-Build/output/python/qgis/utils.py", line 792, in _import mod = _builtin_import(name, globals, locals, fromlist, level) ModuleNotFoundError: No module named 'psycopg2' ``` I think these 'with' profiling clauses in Processing.py: ``` with QgsRuntimeProfiler.profile('Import QGIS Provider'): from processing.algs.qgis.QgisAlgorithmProvider import QgisAlgorithmProvider # NOQA with QgsRuntimeProfiler.profile('Import GRASS Provider'): from processing.algs.grass7.Grass7AlgorithmProvider import Grass7AlgorithmProvider with QgsRuntimeProfiler.profile('Import GDAL Provider'): from processing.algs.gdal.GdalAlgorithmProvider import GdalAlgorithmProvider # NOQA with QgsRuntimeProfiler.profile('Import OTB Provider'): from processing.algs.otb.OtbAlgorithmProvider import OtbAlgorithmProvider # NOQA with QgsRuntimeProfiler.profile('Import SAGA Provider'): from processing.algs.saga.SagaAlgorithmProvider import SagaAlgorithmProvider # NOQA with QgsRuntimeProfiler.profile('Import Script Provider'): from processing.script.ScriptAlgorithmProvider import ScriptAlgorithmProvider # NOQA ``` Hide the python exception that gets raised if a dependency is missing.... ------- Installing python-psycopg2 fixed the problem. Perhaps we can trap this error some how and surface it since it blocks lags from loading that do not even need psycopg2. CC @nyalldawson to capture our off list thread somewhere more accessible
1.0
Processing algorithms silently fail to load when psycopg2 is missing - So [this thread](https://lists.osgeo.org/pipermail/qgis-developer/2021-March/063199.html) on the mailing list raised what I think is the same issue I encountered some time ago when setting up a new computer. All algorithms except native QGIS ones were silently failing to load. I was able to do the following to understand what was going on: ``` Python Console Use iface to access QGIS API interface or Type help(iface) for more info Security warning: typing commands from an untrusted source can harm your computer import os import traceback from qgis.PyQt.QtCore import Qt, QCoreApplication from qgis.PyQt.QtWidgets import QApplication from qgis.PyQt.QtGui import QCursor from qgis.utils import iface from qgis.core import (QgsMessageLog, QgsApplication, QgsMapLayer, QgsProcessingProvider, QgsProcessingAlgorithm, QgsProcessingException, QgsProcessingParameterDefinition, QgsProcessingOutputVectorLayer, QgsProcessingOutputRasterLayer, QgsProcessingOutputMapLayer, QgsProcessingOutputMultipleLayers, QgsProcessingFeedback, QgsRuntimeProfiler) import processing from processing.core.ProcessingConfig import ProcessingConfig from processing.gui.MessageBarProgress import MessageBarProgress from processing.gui.RenderingStyles import RenderingStyles from processing.gui.Postprocessing import handleAlgorithmResults from processing.gui.AlgorithmExecutor import execute from processing.script import ScriptUtils from processing.tools import dataobjects with QgsRuntimeProfiler.profile('Import GDAL Provider'): from processing.algs.gdal.GdalAlgorithmProvider import GdalAlgorithmProvider # NOQA QgsApplication.processingRegistry().providers() [<qgis._core.QgsProcessingProvider object at 0x7fcce901d700>, <qgis._core.QgsProcessingProvider object at 0x7fcce901d8b0>, <qgis2web.qgis2webProvider.qgis2webProvider object at 0x7fcceb368c10>, <QuickOSM.quick_osm_processing.provider.Provider object at 0x7fcceb400940>] from processing.algs.gdal.GdalAlgorithmProvider import GdalAlgorithmProvider Traceback (most recent call last): File "/usr/lib64/python3.8/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "/home/timlinux/dev/cpp/QGIS-Debug-Build/output/python/qgis/utils.py", line 792, in _import mod = _builtin_import(name, globals, locals, fromlist, level) File "/home/timlinux/dev/cpp/QGIS-Debug-Build/output/python/plugins/processing/algs/gdal/GdalAlgorithmProvider.py", line 33, in <module> from .GdalUtils import GdalUtils File "/home/timlinux/dev/cpp/QGIS-Debug-Build/output/python/qgis/utils.py", line 792, in _import mod = _builtin_import(name, globals, locals, fromlist, level) File "/home/timlinux/dev/cpp/QGIS-Debug-Build/output/python/plugins/processing/algs/gdal/GdalUtils.py", line 30, in <module> import psycopg2 File "/home/timlinux/dev/cpp/QGIS-Debug-Build/output/python/qgis/utils.py", line 792, in _import mod = _builtin_import(name, globals, locals, fromlist, level) ModuleNotFoundError: No module named 'psycopg2' ``` I think these 'with' profiling clauses in Processing.py: ``` with QgsRuntimeProfiler.profile('Import QGIS Provider'): from processing.algs.qgis.QgisAlgorithmProvider import QgisAlgorithmProvider # NOQA with QgsRuntimeProfiler.profile('Import GRASS Provider'): from processing.algs.grass7.Grass7AlgorithmProvider import Grass7AlgorithmProvider with QgsRuntimeProfiler.profile('Import GDAL Provider'): from processing.algs.gdal.GdalAlgorithmProvider import GdalAlgorithmProvider # NOQA with QgsRuntimeProfiler.profile('Import OTB Provider'): from processing.algs.otb.OtbAlgorithmProvider import OtbAlgorithmProvider # NOQA with QgsRuntimeProfiler.profile('Import SAGA Provider'): from processing.algs.saga.SagaAlgorithmProvider import SagaAlgorithmProvider # NOQA with QgsRuntimeProfiler.profile('Import Script Provider'): from processing.script.ScriptAlgorithmProvider import ScriptAlgorithmProvider # NOQA ``` Hide the python exception that gets raised if a dependency is missing.... ------- Installing python-psycopg2 fixed the problem. Perhaps we can trap this error some how and surface it since it blocks lags from loading that do not even need psycopg2. CC @nyalldawson to capture our off list thread somewhere more accessible
process
processing algorithms silently fail to load when is missing so on the mailing list raised what i think is the same issue i encountered some time ago when setting up a new computer all algorithms except native qgis ones were silently failing to load i was able to do the following to understand what was going on python console use iface to access qgis api interface or type help iface for more info security warning typing commands from an untrusted source can harm your computer import os import traceback from qgis pyqt qtcore import qt qcoreapplication from qgis pyqt qtwidgets import qapplication from qgis pyqt qtgui import qcursor from qgis utils import iface from qgis core import qgsmessagelog qgsapplication qgsmaplayer qgsprocessingprovider qgsprocessingalgorithm qgsprocessingexception qgsprocessingparameterdefinition qgsprocessingoutputvectorlayer qgsprocessingoutputrasterlayer qgsprocessingoutputmaplayer qgsprocessingoutputmultiplelayers qgsprocessingfeedback qgsruntimeprofiler import processing from processing core processingconfig import processingconfig from processing gui messagebarprogress import messagebarprogress from processing gui renderingstyles import renderingstyles from processing gui postprocessing import handlealgorithmresults from processing gui algorithmexecutor import execute from processing script import scriptutils from processing tools import dataobjects with qgsruntimeprofiler profile import gdal provider from processing algs gdal gdalalgorithmprovider import gdalalgorithmprovider noqa qgsapplication processingregistry providers from processing algs gdal gdalalgorithmprovider import gdalalgorithmprovider traceback most recent call last file usr code py line in runcode exec code self locals file line in file home timlinux dev cpp qgis debug build output python qgis utils py line in import mod builtin import name globals locals fromlist level file home timlinux dev cpp qgis debug build output python plugins processing algs gdal gdalalgorithmprovider py line in from gdalutils import gdalutils file home timlinux dev cpp qgis debug build output python qgis utils py line in import mod builtin import name globals locals fromlist level file home timlinux dev cpp qgis debug build output python plugins processing algs gdal gdalutils py line in import file home timlinux dev cpp qgis debug build output python qgis utils py line in import mod builtin import name globals locals fromlist level modulenotfounderror no module named i think these with profiling clauses in processing py with qgsruntimeprofiler profile import qgis provider from processing algs qgis qgisalgorithmprovider import qgisalgorithmprovider noqa with qgsruntimeprofiler profile import grass provider from processing algs import with qgsruntimeprofiler profile import gdal provider from processing algs gdal gdalalgorithmprovider import gdalalgorithmprovider noqa with qgsruntimeprofiler profile import otb provider from processing algs otb otbalgorithmprovider import otbalgorithmprovider noqa with qgsruntimeprofiler profile import saga provider from processing algs saga sagaalgorithmprovider import sagaalgorithmprovider noqa with qgsruntimeprofiler profile import script provider from processing script scriptalgorithmprovider import scriptalgorithmprovider noqa hide the python exception that gets raised if a dependency is missing installing python fixed the problem perhaps we can trap this error some how and surface it since it blocks lags from loading that do not even need cc nyalldawson to capture our off list thread somewhere more accessible
1
6,178
9,086,886,748
IssuesEvent
2019-02-18 12:13:51
FACK1/ReservationSystem
https://api.github.com/repos/FACK1/ReservationSystem
reopened
login Authentication
inProcess technical
- [x] check if the user logged then redirect the user to the book event page.
1.0
login Authentication - - [x] check if the user logged then redirect the user to the book event page.
process
login authentication check if the user logged then redirect the user to the book event page
1
16,795
22,044,126,588
IssuesEvent
2022-05-29 20:10:56
bow-simulation/virtualbow
https://api.github.com/repos/bow-simulation/virtualbow
closed
Update dependencies
area: software process type: improvement
* Eigen: No need for using the master branch anymore since 3.9.0 * Json: Use `nlohmann::ordered_json` in combination with the new serialization macros * Boost, Catch: No particular reason, but easy to update * Qt: Stay on 5.15 as long as Qt 6 is not widely packaged on Linux yet * NLopt: Not needed anymore
1.0
Update dependencies - * Eigen: No need for using the master branch anymore since 3.9.0 * Json: Use `nlohmann::ordered_json` in combination with the new serialization macros * Boost, Catch: No particular reason, but easy to update * Qt: Stay on 5.15 as long as Qt 6 is not widely packaged on Linux yet * NLopt: Not needed anymore
process
update dependencies eigen no need for using the master branch anymore since json use nlohmann ordered json in combination with the new serialization macros boost catch no particular reason but easy to update qt stay on as long as qt is not widely packaged on linux yet nlopt not needed anymore
1
4,346
7,247,360,450
IssuesEvent
2018-02-15 02:19:02
amigosdapoli/donation-system
https://api.github.com/repos/amigosdapoli/donation-system
closed
System admin is able to download the list of donations through admin interface
admin-processes
Single list with columns differentiating recurring from one-off and payment methods as well
1.0
System admin is able to download the list of donations through admin interface - Single list with columns differentiating recurring from one-off and payment methods as well
process
system admin is able to download the list of donations through admin interface single list with columns differentiating recurring from one off and payment methods as well
1
1,863
4,691,155,569
IssuesEvent
2016-10-11 09:32:07
CERNDocumentServer/cds
https://api.github.com/repos/CERNDocumentServer/cds
closed
Process: initiate processes per video
avc_processing review
For each video we should be able to initiate process with the payload ``` json { "bucket_id": "xxx", "depid": "xxx", ... } ```
1.0
Process: initiate processes per video - For each video we should be able to initiate process with the payload ``` json { "bucket_id": "xxx", "depid": "xxx", ... } ```
process
process initiate processes per video for each video we should be able to initiate process with the payload json bucket id xxx depid xxx
1
13,471
15,962,950,830
IssuesEvent
2021-04-16 02:39:08
rubberduck-vba/Rubberduck
https://api.github.com/repos/rubberduck-vba/Rubberduck
closed
Rubberduck doesn't seem to load twinBASIC types?
bug typeinfo-processing
## ⏩ To Summarize **A quick description of the problem:** This could be a bug in Rubberduck's library loader, or just something weird with my twinBASIC library or my computer, or it could be that twinBASIC type libraries are [currently?] structured in a way that somehow sets them apart from other libraries, but it appears Rubberduck is currently unable to load/resolve a twinBASIC library. CC @WaynePhillipsEA twinBASIC version: 0.9.1602 ## 🔁 To Reproduce **Steps to reproduce the erroneous behavior:** 1. Build a twinBASIC ActiveX type library with a `Class1` class and a `@Description` annotation over the class and at least one public member. 2. Create a new VBA project, reference the built twinBASIC type library; object browser contains Class1 and shows the docstring. 3. Add a new procedure, create a `New Class1` instance, call any of its exposed members, and parse; 4. Place the cursor over the member calls; Rubberduck's context-sensitive selection is... consistent with the resolver failures found in the logs. ``` 2021-04-14 22:39:32.0501;WARN-2.5.1.0;Rubberduck.Parsing.VBA.ReferenceManagement.CompilationPasses.TypeAnnotationPass;Failed to resolve type IAppContext; 2021-04-14 22:39:32.0501;WARN-2.5.1.0;Rubberduck.Parsing.VBA.ReferenceManagement.IdentifierReferenceResolver;Type Context: Failed to resolve IAppContext. Binding as much as we can.; ``` ![Rubberduck commandbar hinting at the non-resolution of the library types](https://user-images.githubusercontent.com/5751684/114809244-ce22fa80-9d77-11eb-9140-38b6c8601403.png) ![VBE Object Browser showing the vbmvvm library loaded](https://user-images.githubusercontent.com/5751684/114808710-cb73d580-9d76-11eb-80b5-d8746646e50c.png) ## ⏹️ To Call it Fixed **A clear description of the correct/expected behavior:** Rubberduck should be able to load twinBASIC type libraries like any other COM type library, and display the source library and docstring for the current selection like it does for any other early-bound reference.
1.0
Rubberduck doesn't seem to load twinBASIC types? - ## ⏩ To Summarize **A quick description of the problem:** This could be a bug in Rubberduck's library loader, or just something weird with my twinBASIC library or my computer, or it could be that twinBASIC type libraries are [currently?] structured in a way that somehow sets them apart from other libraries, but it appears Rubberduck is currently unable to load/resolve a twinBASIC library. CC @WaynePhillipsEA twinBASIC version: 0.9.1602 ## 🔁 To Reproduce **Steps to reproduce the erroneous behavior:** 1. Build a twinBASIC ActiveX type library with a `Class1` class and a `@Description` annotation over the class and at least one public member. 2. Create a new VBA project, reference the built twinBASIC type library; object browser contains Class1 and shows the docstring. 3. Add a new procedure, create a `New Class1` instance, call any of its exposed members, and parse; 4. Place the cursor over the member calls; Rubberduck's context-sensitive selection is... consistent with the resolver failures found in the logs. ``` 2021-04-14 22:39:32.0501;WARN-2.5.1.0;Rubberduck.Parsing.VBA.ReferenceManagement.CompilationPasses.TypeAnnotationPass;Failed to resolve type IAppContext; 2021-04-14 22:39:32.0501;WARN-2.5.1.0;Rubberduck.Parsing.VBA.ReferenceManagement.IdentifierReferenceResolver;Type Context: Failed to resolve IAppContext. Binding as much as we can.; ``` ![Rubberduck commandbar hinting at the non-resolution of the library types](https://user-images.githubusercontent.com/5751684/114809244-ce22fa80-9d77-11eb-9140-38b6c8601403.png) ![VBE Object Browser showing the vbmvvm library loaded](https://user-images.githubusercontent.com/5751684/114808710-cb73d580-9d76-11eb-80b5-d8746646e50c.png) ## ⏹️ To Call it Fixed **A clear description of the correct/expected behavior:** Rubberduck should be able to load twinBASIC type libraries like any other COM type library, and display the source library and docstring for the current selection like it does for any other early-bound reference.
process
rubberduck doesn t seem to load twinbasic types ⏩ to summarize a quick description of the problem this could be a bug in rubberduck s library loader or just something weird with my twinbasic library or my computer or it could be that twinbasic type libraries are structured in a way that somehow sets them apart from other libraries but it appears rubberduck is currently unable to load resolve a twinbasic library cc waynephillipsea twinbasic version 🔁 to reproduce steps to reproduce the erroneous behavior build a twinbasic activex type library with a class and a description annotation over the class and at least one public member create a new vba project reference the built twinbasic type library object browser contains and shows the docstring add a new procedure create a new instance call any of its exposed members and parse place the cursor over the member calls rubberduck s context sensitive selection is consistent with the resolver failures found in the logs warn rubberduck parsing vba referencemanagement compilationpasses typeannotationpass failed to resolve type iappcontext warn rubberduck parsing vba referencemanagement identifierreferenceresolver type context failed to resolve iappcontext binding as much as we can ⏹️ to call it fixed a clear description of the correct expected behavior rubberduck should be able to load twinbasic type libraries like any other com type library and display the source library and docstring for the current selection like it does for any other early bound reference
1
16
2,496,242,720
IssuesEvent
2015-01-06 18:05:05
vivo-isf/vivo-isf-ontology
https://api.github.com/repos/vivo-isf/vivo-isf-ontology
closed
memory consolidation
biological_process imported
_From [fcold...@eagle-i.org](https://code.google.com/u/113677139039624182507/) on November 30, 2012 13:33:25_ \<b>**** Use the form below to request a new term ****</b> \<b>**** Scroll down to see a term request example ****</b> &#13; \<b>Please indicate the label for the proposed term:</b> memory consolidation&#13; &#13; \<b>Please provide a textual definition (with source):</b> "Memory consolidation is a category of processes that stabilize a memory trace after the initial acquisition." From: \<a href="http://en.wikipedia.org/wiki/Memory_consolidation" rel="nofollow">http://en.wikipedia.org/wiki/Memory_consolidation</a>&#13; &#13; Term is present in the Neuroscience Information Framework Standard Ontology: \<a href="http://bioportal.bioontology.org/ontologies/1084?p=terms&amp;conceptid=http&#37;3A&#37;2F&#37;2Fontology.neuinfo.org&#37;2FNIF&#37;2FFunction&#37;2FNIF-Function.owl&#37;23birnlex_1816" rel="nofollow">http://bioportal.bioontology.org/ontologies/1084?p=terms&amp;conceptid=http&#37;3A&#37;2F&#37;2Fontology.neuinfo.org&#37;2FNIF&#37;2FFunction&#37;2FNIF-Function.owl&#37;23birnlex_1816</a>&#13; &#13; \<b>Please add an example of usage for proposed term:</b> To use in Biological Process Studied for model organisms&#13; &#13; &#13; \<b>Please provide any additional optional information below. (e.g. desired</b> \<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b> &#13; \<b>[ ] Instrument</b> [X] Biological process&#13; \<b>[ ] Disease</b> \<b>[ ] Human studies</b> \<b>[ ] Instrument</b> \<b>[ ] Organism</b> \<b>[ ] Reagent</b> \<b>[ ] Software</b> \<b>[ ] Technique</b> \<b>[ ] Organization</b> &#13; \<b>Additional info:</b> &#13; &#13; &#13; &#13; \<b>*** Term request example ****</b> &#13; \<b>Please indicate the label for the proposed term: four-terminal resistance</b> \<b>sensor</b> &#13; &#13; Please provide a textual definition (with source): "Four-terminal&#13; \<b>resistance sensors are electrical impedance measuring instruments that use</b> \<b>separate pairs of current-carrying and voltage-sensing electrodes to make</b> \<b>accurate measurements that can be used to compute a material's electrical</b> resistance." \<a href="http://en.wikipedia.org/wiki/Four-terminal_sensing" rel="nofollow">http://en.wikipedia.org/wiki/Four-terminal_sensing</a>&#13; &#13; &#13; \<b>Please add an example of usage for proposed term: Measuring the inherent</b> \<b>(per square) resistance of doped silicon.</b> &#13; &#13; \<b>Please provide any additional optional information below. (e.g. desired</b> \<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b> &#13; \<b>[X] Instrument</b> \<b>[ ] Biological process</b> \<b>[ ] Disease</b> \<b>[ ] Human studies</b> \<b>[ ] Instrument</b> \<b>[ ] Organism</b> \<b>[ ] Reagent</b> \<b>[ ] Software</b> \<b>[ ] Technique</b> \<b>[ ] Organization</b> &#13; \<b>Additional info: AKA - 4T sensors, 4-wire sensor, or 4-point probe</b> _Original issue: http://code.google.com/p/eagle-i/issues/detail?id=171_
1.0
memory consolidation - _From [fcold...@eagle-i.org](https://code.google.com/u/113677139039624182507/) on November 30, 2012 13:33:25_ \<b>**** Use the form below to request a new term ****</b> \<b>**** Scroll down to see a term request example ****</b> &#13; \<b>Please indicate the label for the proposed term:</b> memory consolidation&#13; &#13; \<b>Please provide a textual definition (with source):</b> "Memory consolidation is a category of processes that stabilize a memory trace after the initial acquisition." From: \<a href="http://en.wikipedia.org/wiki/Memory_consolidation" rel="nofollow">http://en.wikipedia.org/wiki/Memory_consolidation</a>&#13; &#13; Term is present in the Neuroscience Information Framework Standard Ontology: \<a href="http://bioportal.bioontology.org/ontologies/1084?p=terms&amp;conceptid=http&#37;3A&#37;2F&#37;2Fontology.neuinfo.org&#37;2FNIF&#37;2FFunction&#37;2FNIF-Function.owl&#37;23birnlex_1816" rel="nofollow">http://bioportal.bioontology.org/ontologies/1084?p=terms&amp;conceptid=http&#37;3A&#37;2F&#37;2Fontology.neuinfo.org&#37;2FNIF&#37;2FFunction&#37;2FNIF-Function.owl&#37;23birnlex_1816</a>&#13; &#13; \<b>Please add an example of usage for proposed term:</b> To use in Biological Process Studied for model organisms&#13; &#13; &#13; \<b>Please provide any additional optional information below. (e.g. desired</b> \<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b> &#13; \<b>[ ] Instrument</b> [X] Biological process&#13; \<b>[ ] Disease</b> \<b>[ ] Human studies</b> \<b>[ ] Instrument</b> \<b>[ ] Organism</b> \<b>[ ] Reagent</b> \<b>[ ] Software</b> \<b>[ ] Technique</b> \<b>[ ] Organization</b> &#13; \<b>Additional info:</b> &#13; &#13; &#13; &#13; \<b>*** Term request example ****</b> &#13; \<b>Please indicate the label for the proposed term: four-terminal resistance</b> \<b>sensor</b> &#13; &#13; Please provide a textual definition (with source): "Four-terminal&#13; \<b>resistance sensors are electrical impedance measuring instruments that use</b> \<b>separate pairs of current-carrying and voltage-sensing electrodes to make</b> \<b>accurate measurements that can be used to compute a material's electrical</b> resistance." \<a href="http://en.wikipedia.org/wiki/Four-terminal_sensing" rel="nofollow">http://en.wikipedia.org/wiki/Four-terminal_sensing</a>&#13; &#13; &#13; \<b>Please add an example of usage for proposed term: Measuring the inherent</b> \<b>(per square) resistance of doped silicon.</b> &#13; &#13; \<b>Please provide any additional optional information below. (e.g. desired</b> \<b>asserted SuperClass in ERO hierarchy or Reference Branch)</b> &#13; \<b>[X] Instrument</b> \<b>[ ] Biological process</b> \<b>[ ] Disease</b> \<b>[ ] Human studies</b> \<b>[ ] Instrument</b> \<b>[ ] Organism</b> \<b>[ ] Reagent</b> \<b>[ ] Software</b> \<b>[ ] Technique</b> \<b>[ ] Organization</b> &#13; \<b>Additional info: AKA - 4T sensors, 4-wire sensor, or 4-point probe</b> _Original issue: http://code.google.com/p/eagle-i/issues/detail?id=171_
process
memory consolidation from on november use the form below to request a new term scroll down to see a term request example please indicate the label for the proposed term memory consolidation please provide a textual definition with source memory consolidation is a category of processes that stabilize a memory trace after the initial acquisition from term is present in the neuroscience information framework standard ontology please add an example of usage for proposed term to use in biological process studied for model organisms please provide any additional optional information below e g desired asserted superclass in ero hierarchy or reference branch instrument biological process disease human studies instrument organism reagent software technique organization additional info term request example please indicate the label for the proposed term four terminal resistance sensor please provide a textual definition with source four terminal resistance sensors are electrical impedance measuring instruments that use separate pairs of current carrying and voltage sensing electrodes to make accurate measurements that can be used to compute a material s electrical resistance please add an example of usage for proposed term measuring the inherent per square resistance of doped silicon please provide any additional optional information below e g desired asserted superclass in ero hierarchy or reference branch instrument biological process disease human studies instrument organism reagent software technique organization additional info aka sensors wire sensor or point probe original issue
1
11,090
13,931,788,719
IssuesEvent
2020-10-22 06:05:07
sebastianbergmann/phpunit
https://api.github.com/repos/sebastianbergmann/phpunit
closed
Process isolation under phpdbg throws exceptions
feature/process-isolation type/bug
<!-- - Please do not report an issue for a version of PHPUnit that is no longer supported. A list of currently supported versions of PHPUnit is available at https://phpunit.de/supported-versions.html. - Please do not report an issue if you are using a version of PHP that is not supported by the version of PHPUnit you are using. A list that shows which version of PHP is supported by which version of PHPUnit is available at https://phpunit.de/supported-versions.html. - Please fill in this template according to your issue. - Please keep the table shown below at the top of your issue. - Please include the output of "composer info | sort" if you installed PHPUnit using Composer. - Please post code as text (using proper markup). Do not post screenshots of code. - Visit https://phpunit.de/support.html if you are looking for support. - Please remove this comment before submitting your issue. --> | Q | A | --------------------| --------------- | PHPUnit version | 8.4.3 | PHP version | 7.4.0 (phpdbg SAPI) | Installation Method | Composer #### Summary Use of process isolation throws exceptions under `phpdbg`. At first I thought this was a regression of #3772, but it seems to be a separate cause. I think was able to track the bug down to [this line in `TestCaseMethod.tpl`](https://github.com/sebastianbergmann/phpunit/blob/8.4.3/src/Util/PHP/Template/TestCaseMethod.tpl#L66) (there's also similar code in [`TestCaseClass.tpl`](https://github.com/sebastianbergmann/phpunit/blob/8.4.3/src/Util/PHP/Template/TestCaseClass.tpl#L63)). Both these lines attempt to read from `STDOUT`, without first checking if it is a readable stream. If I put in a debug statement using `stream_get_meta_data(STDOUT)`, you can see that `STDOUT` is not readable: ``` array(9) { ["timed_out"]=> bool(false) ["blocked"]=> bool(true) ["eof"]=> bool(false) ["wrapper_type"]=> string(3) "PHP" ["stream_type"]=> string(5) "STDIO" ["mode"]=> string(2) "wb" ["unread_bytes"]=> int(0) ["seekable"]=> bool(false) ["uri"]=> string(12) "php://stdout" } ``` See also: - [ezzatron/phpunit-repro on the phpdbg-process-isolation branch](https://github.com/ezzatron/phpunit-repro/tree/phpdbg-process-isolation) for repro code - An [example CI failure](https://travis-ci.org/ezzatron/phpunit-repro/jobs/618814031) under Travis - An [example CI failure](https://github.com/ezzatron/phpunit-repro/runs/326747230) under GitHub Actions #### Current behavior Any test that uses process isolation under `phpdbg` does not run, and instead throws an exception similar to: `PHPUnit\Framework\Exception: Notice: stream_get_contents(): read of 8192 bytes failed with errno=9 Bad file descriptor in Standard input code on line 336`. #### How to reproduce Run this test under `phpdbg`: ```php <?php use PHPUnit\Framework\TestCase; class ReproTest extends TestCase { /** * @doesNotPerformAssertions * @runInSeparateProcess */ public function testProcessIsolation() { } } ``` And see this output: ``` $ phpdbg -qrr vendor/bin/phpunit PHPUnit 8.4.3 by Sebastian Bergmann and contributors. E 1 / 1 (100%) Time: 121 ms, Memory: 6.00 MB There was 1 error: 1) ReproTest::testProcessIsolation PHPUnit\Framework\Exception: Notice: stream_get_contents(): read of 8192 bytes failed with errno=9 Bad file descriptor in Standard input code on line 336 ERRORS! Tests: 1, Assertions: 0, Errors: 1. ``` #### Expected behavior Process isolation should work for `phpdbg` in the same way as it does for other PHP SAPIs. #### Output of `composer info | sort` ``` doctrine/instantiator 1.3.0 A small, lightweight utility to instantiate objects in PHP without invoking their constructors myclabs/deep-copy 1.9.3 Create deep copies (clones) of your objects phar-io/manifest 1.0.3 Component for reading phar.io manifest information from a PHP Archive (PHAR) phar-io/version 2.0.1 Library for handling version information and constraints phpdocumentor/reflection-common 2.0.0 Common reflection classes used by phpdocumentor to reflect the code structure phpdocumentor/reflection-docblock 4.3.2 With this component, a library can provide support for annotations via DocBlocks or otherwise retrieve information that is embedded in a DocBlock. phpdocumentor/type-resolver 1.0.1 A PSR-5 based resolver of Class names, Types and Structural Element Names phpspec/prophecy 1.9.0 Highly opinionated mocking framework for PHP 5.3+ phpunit/php-code-coverage 7.0.10 Library that provides collection, processing, and rendering functionality for PHP code coverage information. phpunit/php-file-iterator 2.0.2 FilterIterator implementation that filters files based on a list of suffixes. phpunit/php-text-template 1.2.1 Simple template engine. phpunit/php-timer 2.1.2 Utility class for timing phpunit/php-token-stream 3.1.1 Wrapper around PHP's tokenizer extension. phpunit/phpunit 8.4.3 The PHP Unit Testing framework. sebastian/code-unit-reverse-lookup 1.0.1 Looks up which function or method a line of code belongs to sebastian/comparator 3.0.2 Provides the functionality to compare PHP values for equality sebastian/diff 3.0.2 Diff implementation sebastian/environment 4.2.3 Provides functionality to handle HHVM/PHP environments sebastian/exporter 3.1.2 Provides the functionality to export PHP variables for visualization sebastian/global-state 3.0.0 Snapshotting of global state sebastian/object-enumerator 3.0.3 Traverses array structures and object graphs to enumerate all referenced objects sebastian/object-reflector 1.1.1 Allows reflection of object attributes, including inherited and non-public ones sebastian/recursion-context 3.0.0 Provides functionality to recursively process PHP variables sebastian/resource-operations 2.0.1 Provides a list of PHP built-in functions that operate on resources sebastian/type 1.1.3 Collection of value objects that represent the types of the PHP type system sebastian/version 2.0.1 Library that helps with managing the version number of Git-hosted PHP projects symfony/polyfill-ctype v1.13.0 Symfony polyfill for ctype functions theseer/tokenizer 1.1.3 A small library for converting tokenized PHP source code into XML and potentially other formats webmozart/assert 1.6.0 Assertions to validate method input/output with nice error messages. ```
1.0
Process isolation under phpdbg throws exceptions - <!-- - Please do not report an issue for a version of PHPUnit that is no longer supported. A list of currently supported versions of PHPUnit is available at https://phpunit.de/supported-versions.html. - Please do not report an issue if you are using a version of PHP that is not supported by the version of PHPUnit you are using. A list that shows which version of PHP is supported by which version of PHPUnit is available at https://phpunit.de/supported-versions.html. - Please fill in this template according to your issue. - Please keep the table shown below at the top of your issue. - Please include the output of "composer info | sort" if you installed PHPUnit using Composer. - Please post code as text (using proper markup). Do not post screenshots of code. - Visit https://phpunit.de/support.html if you are looking for support. - Please remove this comment before submitting your issue. --> | Q | A | --------------------| --------------- | PHPUnit version | 8.4.3 | PHP version | 7.4.0 (phpdbg SAPI) | Installation Method | Composer #### Summary Use of process isolation throws exceptions under `phpdbg`. At first I thought this was a regression of #3772, but it seems to be a separate cause. I think was able to track the bug down to [this line in `TestCaseMethod.tpl`](https://github.com/sebastianbergmann/phpunit/blob/8.4.3/src/Util/PHP/Template/TestCaseMethod.tpl#L66) (there's also similar code in [`TestCaseClass.tpl`](https://github.com/sebastianbergmann/phpunit/blob/8.4.3/src/Util/PHP/Template/TestCaseClass.tpl#L63)). Both these lines attempt to read from `STDOUT`, without first checking if it is a readable stream. If I put in a debug statement using `stream_get_meta_data(STDOUT)`, you can see that `STDOUT` is not readable: ``` array(9) { ["timed_out"]=> bool(false) ["blocked"]=> bool(true) ["eof"]=> bool(false) ["wrapper_type"]=> string(3) "PHP" ["stream_type"]=> string(5) "STDIO" ["mode"]=> string(2) "wb" ["unread_bytes"]=> int(0) ["seekable"]=> bool(false) ["uri"]=> string(12) "php://stdout" } ``` See also: - [ezzatron/phpunit-repro on the phpdbg-process-isolation branch](https://github.com/ezzatron/phpunit-repro/tree/phpdbg-process-isolation) for repro code - An [example CI failure](https://travis-ci.org/ezzatron/phpunit-repro/jobs/618814031) under Travis - An [example CI failure](https://github.com/ezzatron/phpunit-repro/runs/326747230) under GitHub Actions #### Current behavior Any test that uses process isolation under `phpdbg` does not run, and instead throws an exception similar to: `PHPUnit\Framework\Exception: Notice: stream_get_contents(): read of 8192 bytes failed with errno=9 Bad file descriptor in Standard input code on line 336`. #### How to reproduce Run this test under `phpdbg`: ```php <?php use PHPUnit\Framework\TestCase; class ReproTest extends TestCase { /** * @doesNotPerformAssertions * @runInSeparateProcess */ public function testProcessIsolation() { } } ``` And see this output: ``` $ phpdbg -qrr vendor/bin/phpunit PHPUnit 8.4.3 by Sebastian Bergmann and contributors. E 1 / 1 (100%) Time: 121 ms, Memory: 6.00 MB There was 1 error: 1) ReproTest::testProcessIsolation PHPUnit\Framework\Exception: Notice: stream_get_contents(): read of 8192 bytes failed with errno=9 Bad file descriptor in Standard input code on line 336 ERRORS! Tests: 1, Assertions: 0, Errors: 1. ``` #### Expected behavior Process isolation should work for `phpdbg` in the same way as it does for other PHP SAPIs. #### Output of `composer info | sort` ``` doctrine/instantiator 1.3.0 A small, lightweight utility to instantiate objects in PHP without invoking their constructors myclabs/deep-copy 1.9.3 Create deep copies (clones) of your objects phar-io/manifest 1.0.3 Component for reading phar.io manifest information from a PHP Archive (PHAR) phar-io/version 2.0.1 Library for handling version information and constraints phpdocumentor/reflection-common 2.0.0 Common reflection classes used by phpdocumentor to reflect the code structure phpdocumentor/reflection-docblock 4.3.2 With this component, a library can provide support for annotations via DocBlocks or otherwise retrieve information that is embedded in a DocBlock. phpdocumentor/type-resolver 1.0.1 A PSR-5 based resolver of Class names, Types and Structural Element Names phpspec/prophecy 1.9.0 Highly opinionated mocking framework for PHP 5.3+ phpunit/php-code-coverage 7.0.10 Library that provides collection, processing, and rendering functionality for PHP code coverage information. phpunit/php-file-iterator 2.0.2 FilterIterator implementation that filters files based on a list of suffixes. phpunit/php-text-template 1.2.1 Simple template engine. phpunit/php-timer 2.1.2 Utility class for timing phpunit/php-token-stream 3.1.1 Wrapper around PHP's tokenizer extension. phpunit/phpunit 8.4.3 The PHP Unit Testing framework. sebastian/code-unit-reverse-lookup 1.0.1 Looks up which function or method a line of code belongs to sebastian/comparator 3.0.2 Provides the functionality to compare PHP values for equality sebastian/diff 3.0.2 Diff implementation sebastian/environment 4.2.3 Provides functionality to handle HHVM/PHP environments sebastian/exporter 3.1.2 Provides the functionality to export PHP variables for visualization sebastian/global-state 3.0.0 Snapshotting of global state sebastian/object-enumerator 3.0.3 Traverses array structures and object graphs to enumerate all referenced objects sebastian/object-reflector 1.1.1 Allows reflection of object attributes, including inherited and non-public ones sebastian/recursion-context 3.0.0 Provides functionality to recursively process PHP variables sebastian/resource-operations 2.0.1 Provides a list of PHP built-in functions that operate on resources sebastian/type 1.1.3 Collection of value objects that represent the types of the PHP type system sebastian/version 2.0.1 Library that helps with managing the version number of Git-hosted PHP projects symfony/polyfill-ctype v1.13.0 Symfony polyfill for ctype functions theseer/tokenizer 1.1.3 A small library for converting tokenized PHP source code into XML and potentially other formats webmozart/assert 1.6.0 Assertions to validate method input/output with nice error messages. ```
process
process isolation under phpdbg throws exceptions please do not report an issue for a version of phpunit that is no longer supported a list of currently supported versions of phpunit is available at please do not report an issue if you are using a version of php that is not supported by the version of phpunit you are using a list that shows which version of php is supported by which version of phpunit is available at please fill in this template according to your issue please keep the table shown below at the top of your issue please include the output of composer info sort if you installed phpunit using composer please post code as text using proper markup do not post screenshots of code visit if you are looking for support please remove this comment before submitting your issue q a phpunit version php version phpdbg sapi installation method composer summary use of process isolation throws exceptions under phpdbg at first i thought this was a regression of but it seems to be a separate cause i think was able to track the bug down to there s also similar code in both these lines attempt to read from stdout without first checking if it is a readable stream if i put in a debug statement using stream get meta data stdout you can see that stdout is not readable array bool false bool true bool false string php string stdio string wb int bool false string php stdout see also for repro code an under travis an under github actions current behavior any test that uses process isolation under phpdbg does not run and instead throws an exception similar to phpunit framework exception notice stream get contents read of bytes failed with errno bad file descriptor in standard input code on line how to reproduce run this test under phpdbg php php use phpunit framework testcase class reprotest extends testcase doesnotperformassertions runinseparateprocess public function testprocessisolation and see this output phpdbg qrr vendor bin phpunit phpunit by sebastian bergmann and contributors e time ms memory mb there was error reprotest testprocessisolation phpunit framework exception notice stream get contents read of bytes failed with errno bad file descriptor in standard input code on line errors tests assertions errors expected behavior process isolation should work for phpdbg in the same way as it does for other php sapis output of composer info sort doctrine instantiator a small lightweight utility to instantiate objects in php without invoking their constructors myclabs deep copy create deep copies clones of your objects phar io manifest component for reading phar io manifest information from a php archive phar phar io version library for handling version information and constraints phpdocumentor reflection common common reflection classes used by phpdocumentor to reflect the code structure phpdocumentor reflection docblock with this component a library can provide support for annotations via docblocks or otherwise retrieve information that is embedded in a docblock phpdocumentor type resolver a psr based resolver of class names types and structural element names phpspec prophecy highly opinionated mocking framework for php phpunit php code coverage library that provides collection processing and rendering functionality for php code coverage information phpunit php file iterator filteriterator implementation that filters files based on a list of suffixes phpunit php text template simple template engine phpunit php timer utility class for timing phpunit php token stream wrapper around php s tokenizer extension phpunit phpunit the php unit testing framework sebastian code unit reverse lookup looks up which function or method a line of code belongs to sebastian comparator provides the functionality to compare php values for equality sebastian diff diff implementation sebastian environment provides functionality to handle hhvm php environments sebastian exporter provides the functionality to export php variables for visualization sebastian global state snapshotting of global state sebastian object enumerator traverses array structures and object graphs to enumerate all referenced objects sebastian object reflector allows reflection of object attributes including inherited and non public ones sebastian recursion context provides functionality to recursively process php variables sebastian resource operations provides a list of php built in functions that operate on resources sebastian type collection of value objects that represent the types of the php type system sebastian version library that helps with managing the version number of git hosted php projects symfony polyfill ctype symfony polyfill for ctype functions theseer tokenizer a small library for converting tokenized php source code into xml and potentially other formats webmozart assert assertions to validate method input output with nice error messages
1
15,043
18,762,447,034
IssuesEvent
2021-11-05 18:10:23
googleapis/python-datacatalog
https://api.github.com/repos/googleapis/python-datacatalog
closed
Samples that depend on BigQuery are not compatible with Python 3.10
api: datacatalog type: process
`google-cloud-bigquery` does not yet support Python 3.10. When it does, the 3.10 samples check should turn green. For now, it is OK to merge PRs with a failing 3.10 samples check (the status check is intentionally optional). See https://github.com/googleapis/python-bigquery/issues/1006 for the status of python 3.10 support. [periodic build log](https://source.cloud.google.com/results/invocations/fe26a546-9c3f-414d-b36c-e9898fdadbac/targets) ``` ******************** TESTING PROJECTS ******************** ------------------------------------------------------------ - testing samples/quickstart ------------------------------------------------------------ No user noxfile_config found: detail: No module named 'noxfile_config' nox > Running session py-3.10 nox > Creating virtual environment (virtualenv) using python3.10 in .nox/py-3-10 nox > python -m pip install -r requirements.txt nox > python -m pip install -r requirements-test.txt nox > Command python -m pip install -r requirements-test.txt failed with exit code 1: Collecting pytest==6.2.5 Downloading pytest-6.2.5-py3-none-any.whl (280 kB) ERROR: Could not find a version that satisfies the requirement google-cloud-bigquery==2.28.0 (from versions: 0.20.0, 0.21.0, 0.22.0, 0.22.1, 0.23.0, 0.24.0, 0.25.0, 0.26.0, 0.27.0, 0.28.0, 0.29.0, 0.30.0, 0.31.0, 0.32.0, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.5.1, 1.5.2, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.12.0, 1.12.1, 1.12.2, 1.13.0, 1.13.1, 1.14.0, 1.14.1, 1.15.0, 1.15.1, 1.16.0, 1.16.1, 1.17.0, 1.17.1, 1.18.0, 1.18.1, 1.19.0, 1.19.1, 1.20.0, 1.21.0, 1.22.0, 1.23.0, 1.23.1, 1.24.0, 1.25.0, 1.26.0, 1.26.1, 1.27.2, 1.28.0, 2.0.0, 2.1.0, 2.2.0, 2.3.1, 2.4.0, 2.5.0, 2.6.0, 2.6.1) ERROR: No matching distribution found for google-cloud-bigquery==2.28.0 nox > Session py-3.10 failed. ```
1.0
Samples that depend on BigQuery are not compatible with Python 3.10 - `google-cloud-bigquery` does not yet support Python 3.10. When it does, the 3.10 samples check should turn green. For now, it is OK to merge PRs with a failing 3.10 samples check (the status check is intentionally optional). See https://github.com/googleapis/python-bigquery/issues/1006 for the status of python 3.10 support. [periodic build log](https://source.cloud.google.com/results/invocations/fe26a546-9c3f-414d-b36c-e9898fdadbac/targets) ``` ******************** TESTING PROJECTS ******************** ------------------------------------------------------------ - testing samples/quickstart ------------------------------------------------------------ No user noxfile_config found: detail: No module named 'noxfile_config' nox > Running session py-3.10 nox > Creating virtual environment (virtualenv) using python3.10 in .nox/py-3-10 nox > python -m pip install -r requirements.txt nox > python -m pip install -r requirements-test.txt nox > Command python -m pip install -r requirements-test.txt failed with exit code 1: Collecting pytest==6.2.5 Downloading pytest-6.2.5-py3-none-any.whl (280 kB) ERROR: Could not find a version that satisfies the requirement google-cloud-bigquery==2.28.0 (from versions: 0.20.0, 0.21.0, 0.22.0, 0.22.1, 0.23.0, 0.24.0, 0.25.0, 0.26.0, 0.27.0, 0.28.0, 0.29.0, 0.30.0, 0.31.0, 0.32.0, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.5.1, 1.5.2, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.12.0, 1.12.1, 1.12.2, 1.13.0, 1.13.1, 1.14.0, 1.14.1, 1.15.0, 1.15.1, 1.16.0, 1.16.1, 1.17.0, 1.17.1, 1.18.0, 1.18.1, 1.19.0, 1.19.1, 1.20.0, 1.21.0, 1.22.0, 1.23.0, 1.23.1, 1.24.0, 1.25.0, 1.26.0, 1.26.1, 1.27.2, 1.28.0, 2.0.0, 2.1.0, 2.2.0, 2.3.1, 2.4.0, 2.5.0, 2.6.0, 2.6.1) ERROR: No matching distribution found for google-cloud-bigquery==2.28.0 nox > Session py-3.10 failed. ```
process
samples that depend on bigquery are not compatible with python google cloud bigquery does not yet support python when it does the samples check should turn green for now it is ok to merge prs with a failing samples check the status check is intentionally optional see for the status of python support testing projects testing samples quickstart no user noxfile config found detail no module named noxfile config nox running session py nox creating virtual environment virtualenv using in nox py nox python m pip install r requirements txt nox python m pip install r requirements test txt nox command python m pip install r requirements test txt failed with exit code collecting pytest downloading pytest none any whl kb error could not find a version that satisfies the requirement google cloud bigquery from versions error no matching distribution found for google cloud bigquery nox session py failed
1
389
2,838,438,097
IssuesEvent
2015-05-27 07:40:59
ChelseaStats/issues
https://api.github.com/repos/ChelseaStats/issues
closed
PA_dugout May 26 2015 at 11:27PM
to process tweet ★ priority-medium
<blockquote class="twitter-tweet"> <p lang="en" dir="ltr" xml:lang="en">Chelsea boss Jose Mourinho has been named LMA Premier League Manager of the Year <a href="http://u.thechels.uk/1RlWytj">http://pic.twitter.com/SWXfywrgT9</a></p> &mdash; PA Dugout (@PA_dugout) <a href="https://twitter.com/PA_dugout/status/603326563959046144">May 26, 2015</a> </blockquote> <br><br> May 26, 2015 at 11:27PM<br> via Twitter<br><hr><br><br> http://twitter.com/PA_dugout/status/603326563959046144
1.0
PA_dugout May 26 2015 at 11:27PM - <blockquote class="twitter-tweet"> <p lang="en" dir="ltr" xml:lang="en">Chelsea boss Jose Mourinho has been named LMA Premier League Manager of the Year <a href="http://u.thechels.uk/1RlWytj">http://pic.twitter.com/SWXfywrgT9</a></p> &mdash; PA Dugout (@PA_dugout) <a href="https://twitter.com/PA_dugout/status/603326563959046144">May 26, 2015</a> </blockquote> <br><br> May 26, 2015 at 11:27PM<br> via Twitter<br><hr><br><br> http://twitter.com/PA_dugout/status/603326563959046144
process
pa dugout may at chelsea boss jose mourinho has been named lma premier league manager of the year a href mdash pa dugout pa dugout may at via twitter
1
14,948
18,428,165,489
IssuesEvent
2021-10-14 02:35:58
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Polygon to line algorithm does not recognise smooth output in Graphical modeler
Processing Bug Modeller
### What is the bug or the crash? In graphical modeler, I want to smooth polygons then extract the lines of theses smoothed polygons. The algorithm "polygon to lines" does not recognise the output from "smooth algorithm" ### Steps to reproduce the issue In graphical modeler: Create Smooth algorithm . Then create "polygon to lines" algorithm Check in the list of algorithms output: the "smoothed" output is not available. ![image](https://user-images.githubusercontent.com/26409106/137086955-d9c2dc8a-41b7-4f70-bb09-a8a683be6e8b.png) ### Versions QGIS 3.16.11 ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. ### New profile - [x] I tried with a new QGIS profile ### Additional context In my case, I solved the issue by using first "polygon to lines" then "smooth" algorithm. But the bug would be interesting to solve anyway.
1.0
Polygon to line algorithm does not recognise smooth output in Graphical modeler - ### What is the bug or the crash? In graphical modeler, I want to smooth polygons then extract the lines of theses smoothed polygons. The algorithm "polygon to lines" does not recognise the output from "smooth algorithm" ### Steps to reproduce the issue In graphical modeler: Create Smooth algorithm . Then create "polygon to lines" algorithm Check in the list of algorithms output: the "smoothed" output is not available. ![image](https://user-images.githubusercontent.com/26409106/137086955-d9c2dc8a-41b7-4f70-bb09-a8a683be6e8b.png) ### Versions QGIS 3.16.11 ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. ### New profile - [x] I tried with a new QGIS profile ### Additional context In my case, I solved the issue by using first "polygon to lines" then "smooth" algorithm. But the bug would be interesting to solve anyway.
process
polygon to line algorithm does not recognise smooth output in graphical modeler what is the bug or the crash in graphical modeler i want to smooth polygons then extract the lines of theses smoothed polygons the algorithm polygon to lines does not recognise the output from smooth algorithm steps to reproduce the issue in graphical modeler create smooth algorithm then create polygon to lines algorithm check in the list of algorithms output the smoothed output is not available versions qgis supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context in my case i solved the issue by using first polygon to lines then smooth algorithm but the bug would be interesting to solve anyway
1
21,056
28,005,361,411
IssuesEvent
2023-03-27 14:57:23
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
Random hangs and failures when sending tensors that are split using torch.split in a JoinableQueue
high priority module: multiprocessing triaged
### 🐛 Random hangs and failures when sending tensors that are split using torch.split in a JoinableQueue Splitting tensors using torch.split and sending them to processes using a JoinableQueue seems to cause random errors and hangs in 2.0.0.dev20230130+cu116, while works perfectly fine on 1.9.1+cu102 I tried to make the code to reproduce as small as I could. The key ingredients are torch.split and JoinableQueue. The following script hangs on my CUDA machine using PyTorch 2.0 while it completes successfully on PyTorch 1.9. ``` import os import sys import tempfile import torch import torch.distributed as dist import torch.multiprocessing as mp def setup(rank: int, world_size: int) -> None: backend = 'nccl' if torch.cuda.is_available() else 'gloo' dist.init_process_group(backend, init_method='tcp://{}'.format('127.0.0.1:23456'), rank=rank, world_size=world_size) def cleanup() -> None: dist.destroy_process_group() def demo_basic(rank: int, queue: mp.JoinableQueue, world_size: int) -> None: setup(rank, world_size) device = f'cuda:{rank}' if torch.cuda.is_available() else 'cpu' while True: batch = queue.get() batch = batch.to(device) try: negative_in_batch = batch.lt(0).all().item() if negative_in_batch: print("Found negative in batch", sys.stderr) finally: queue.task_done() def split_batch(batch: torch.Tensor, world_size: int) -> torch.Tensor: return torch.split(batch, batch.shape[0] // world_size) # if I use torch.split(batch, batch.shape[0] // world_size).clone() instead no error is observed def run_demo(world_size: int) -> None: print(torch.__version__, file=sys.stderr) num_batches = 10000 batch_size = 64 ctx = mp.get_context('spawn') queues = [ctx.JoinableQueue() for _ in range(world_size)] os.environ['MASTER_ADDR'] = '127.0.0.1' os.environ['MASTER_PORT'] = '29500' processes = [ctx.Process(target=demo_basic, args=(i, queues[i], world_size)) for i in range(world_size)] for p in processes: p.start() for i in range(num_batches): large_batch = torch.randint(100000, size=(batch_size,)) batches = split_batch(large_batch, world_size) # if I remove this line and send the large batch instead no error is observed print(f'queuing batch {i}', file=sys.stderr) for batch, queue in zip(batches, queues): queue.put(batch) for q in queues: q.join() for p in processes: p.terminate() def main() -> None: run_demo(4) if __name__ == '__main__': main() ``` On CPU the behaviour is more random I sometimes observe the following error after some runtime: ``` Traceback (most recent call last): File "/opt/conda/lib/python3.8/multiprocessing/queues.py", line 239, in _feed obj = _ForkingPickler.dumps(obj) File "/opt/conda/lib/python3.8/multiprocessing/reduction.py", line 51, in dumps cls(buf, protocol).dump(obj) File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 358, in reduce_storage metadata = storage._share_filename_cpu_() RuntimeError: Trying to resize storage that is not resizable ``` while sometime the code runs successfully. I verified that the code runs fine in 1.9.1+cu102 in both CPU and GPU but don't know about other versions. ### Versions Cuda environment: PyTorch version: 2.0.0.dev20230130+cu116 Is debug build: False CUDA used to build PyTorch: 11.6 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.6 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: Could not collect CMake version: version 3.25.0 Libc version: glibc-2.27 Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.17 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 79 Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz Stepping: 1 CPU MHz: 2700.202 CPU max MHz: 3000.0000 CPU min MHz: 1200.0000 BogoMIPS: 4600.03 Hypervisor vendor: Xen Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 46080K NUMA node0 CPU(s): 0-31 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtmrdseed adx xsaveopt Versions of relevant libraries: [pip3] numpy==1.24.1 [pip3] numpyro==0.6.0 [pip3] pytorch-triton==2.0.0+0d7e753227 [pip3] torch==2.0.0.dev20230130+cu116 [pip3] torchaudio==2.0.0.dev20230130+cu116 [pip3] torchvision==0.15.0.dev20230130+cu116 [conda] blas 1.0 mkl [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py38h7f8727e_0 [conda] mkl_fft 1.3.1 py38hd3c417c_0 [conda] mkl_random 1.2.2 py38h51133e4_0 [conda] numpy 1.24.1 pypi_0 pypi [conda] numpyro 0.6.0 pypi_0 pypi [conda] pytorch-triton 2.0.0+0d7e753227 pypi_0 pypi [conda] torch 2.0.0.dev20230130+cu116 pypi_0 pypi [conda] torchaudio 2.0.0.dev20230130+cu116 pypi_0 pypi [conda] torchvision 0.15.0.dev20230130+cu116 pypi_0 pypi CPU environment: PyTorch version: 2.0.0.dev20230130+cu116 Is debug build: False CUDA used to build PyTorch: 11.6 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.6 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: Could not collect CMake version: version 3.25.0 Libc version: glibc-2.27 Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.15.49-linuxkit-x86_64-with-glibc2.17 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 6 On-line CPU(s) list: 0-5 Thread(s) per core: 1 Core(s) per socket: 6 Socket(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 158 Model name: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz Stepping: 10 CPU MHz: 2591.608 BogoMIPS: 5183.21 L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 12288K Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 avx2 smep bmi2 erms rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 arat Versions of relevant libraries: [pip3] numpy==1.24.1 [pip3] numpyro==0.6.0 [pip3] pytorch-triton==2.0.0+0d7e753227 [pip3] torch==2.0.0.dev20230130+cu116 [pip3] torchaudio==2.0.0.dev20230130+cu116 [pip3] torchvision==0.15.0.dev20230130+cu116 [conda] blas 1.0 mkl [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py38h7f8727e_0 [conda] mkl_fft 1.3.1 py38hd3c417c_0 [conda] mkl_random 1.2.2 py38h51133e4_0 [conda] numpy 1.24.1 pypi_0 pypi [conda] numpyro 0.6.0 pypi_0 pypi [conda] pytorch-triton 2.0.0+0d7e753227 pypi_0 pypi [conda] torch 2.0.0.dev20230130+cu116 pypi_0 pypi [conda] torchaudio 2.0.0.dev20230130+cu116 pypi_0 pypi [conda] torchvision 0.15.0.dev20230130+cu116 pypi_0 pypi cc @ezyang @gchanan @zou3519 @VitalyFedyunin @ejguan
1.0
Random hangs and failures when sending tensors that are split using torch.split in a JoinableQueue - ### 🐛 Random hangs and failures when sending tensors that are split using torch.split in a JoinableQueue Splitting tensors using torch.split and sending them to processes using a JoinableQueue seems to cause random errors and hangs in 2.0.0.dev20230130+cu116, while works perfectly fine on 1.9.1+cu102 I tried to make the code to reproduce as small as I could. The key ingredients are torch.split and JoinableQueue. The following script hangs on my CUDA machine using PyTorch 2.0 while it completes successfully on PyTorch 1.9. ``` import os import sys import tempfile import torch import torch.distributed as dist import torch.multiprocessing as mp def setup(rank: int, world_size: int) -> None: backend = 'nccl' if torch.cuda.is_available() else 'gloo' dist.init_process_group(backend, init_method='tcp://{}'.format('127.0.0.1:23456'), rank=rank, world_size=world_size) def cleanup() -> None: dist.destroy_process_group() def demo_basic(rank: int, queue: mp.JoinableQueue, world_size: int) -> None: setup(rank, world_size) device = f'cuda:{rank}' if torch.cuda.is_available() else 'cpu' while True: batch = queue.get() batch = batch.to(device) try: negative_in_batch = batch.lt(0).all().item() if negative_in_batch: print("Found negative in batch", sys.stderr) finally: queue.task_done() def split_batch(batch: torch.Tensor, world_size: int) -> torch.Tensor: return torch.split(batch, batch.shape[0] // world_size) # if I use torch.split(batch, batch.shape[0] // world_size).clone() instead no error is observed def run_demo(world_size: int) -> None: print(torch.__version__, file=sys.stderr) num_batches = 10000 batch_size = 64 ctx = mp.get_context('spawn') queues = [ctx.JoinableQueue() for _ in range(world_size)] os.environ['MASTER_ADDR'] = '127.0.0.1' os.environ['MASTER_PORT'] = '29500' processes = [ctx.Process(target=demo_basic, args=(i, queues[i], world_size)) for i in range(world_size)] for p in processes: p.start() for i in range(num_batches): large_batch = torch.randint(100000, size=(batch_size,)) batches = split_batch(large_batch, world_size) # if I remove this line and send the large batch instead no error is observed print(f'queuing batch {i}', file=sys.stderr) for batch, queue in zip(batches, queues): queue.put(batch) for q in queues: q.join() for p in processes: p.terminate() def main() -> None: run_demo(4) if __name__ == '__main__': main() ``` On CPU the behaviour is more random I sometimes observe the following error after some runtime: ``` Traceback (most recent call last): File "/opt/conda/lib/python3.8/multiprocessing/queues.py", line 239, in _feed obj = _ForkingPickler.dumps(obj) File "/opt/conda/lib/python3.8/multiprocessing/reduction.py", line 51, in dumps cls(buf, protocol).dump(obj) File "/opt/conda/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 358, in reduce_storage metadata = storage._share_filename_cpu_() RuntimeError: Trying to resize storage that is not resizable ``` while sometime the code runs successfully. I verified that the code runs fine in 1.9.1+cu102 in both CPU and GPU but don't know about other versions. ### Versions Cuda environment: PyTorch version: 2.0.0.dev20230130+cu116 Is debug build: False CUDA used to build PyTorch: 11.6 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.6 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: Could not collect CMake version: version 3.25.0 Libc version: glibc-2.27 Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.17 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 79 Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz Stepping: 1 CPU MHz: 2700.202 CPU max MHz: 3000.0000 CPU min MHz: 1200.0000 BogoMIPS: 4600.03 Hypervisor vendor: Xen Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 46080K NUMA node0 CPU(s): 0-31 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtmrdseed adx xsaveopt Versions of relevant libraries: [pip3] numpy==1.24.1 [pip3] numpyro==0.6.0 [pip3] pytorch-triton==2.0.0+0d7e753227 [pip3] torch==2.0.0.dev20230130+cu116 [pip3] torchaudio==2.0.0.dev20230130+cu116 [pip3] torchvision==0.15.0.dev20230130+cu116 [conda] blas 1.0 mkl [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py38h7f8727e_0 [conda] mkl_fft 1.3.1 py38hd3c417c_0 [conda] mkl_random 1.2.2 py38h51133e4_0 [conda] numpy 1.24.1 pypi_0 pypi [conda] numpyro 0.6.0 pypi_0 pypi [conda] pytorch-triton 2.0.0+0d7e753227 pypi_0 pypi [conda] torch 2.0.0.dev20230130+cu116 pypi_0 pypi [conda] torchaudio 2.0.0.dev20230130+cu116 pypi_0 pypi [conda] torchvision 0.15.0.dev20230130+cu116 pypi_0 pypi CPU environment: PyTorch version: 2.0.0.dev20230130+cu116 Is debug build: False CUDA used to build PyTorch: 11.6 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.6 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: Could not collect CMake version: version 3.25.0 Libc version: glibc-2.27 Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.15.49-linuxkit-x86_64-with-glibc2.17 Is CUDA available: False CUDA runtime version: No CUDA CUDA_MODULE_LOADING set to: N/A GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 6 On-line CPU(s) list: 0-5 Thread(s) per core: 1 Core(s) per socket: 6 Socket(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 158 Model name: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz Stepping: 10 CPU MHz: 2591.608 BogoMIPS: 5183.21 L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 12288K Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 avx2 smep bmi2 erms rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 arat Versions of relevant libraries: [pip3] numpy==1.24.1 [pip3] numpyro==0.6.0 [pip3] pytorch-triton==2.0.0+0d7e753227 [pip3] torch==2.0.0.dev20230130+cu116 [pip3] torchaudio==2.0.0.dev20230130+cu116 [pip3] torchvision==0.15.0.dev20230130+cu116 [conda] blas 1.0 mkl [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py38h7f8727e_0 [conda] mkl_fft 1.3.1 py38hd3c417c_0 [conda] mkl_random 1.2.2 py38h51133e4_0 [conda] numpy 1.24.1 pypi_0 pypi [conda] numpyro 0.6.0 pypi_0 pypi [conda] pytorch-triton 2.0.0+0d7e753227 pypi_0 pypi [conda] torch 2.0.0.dev20230130+cu116 pypi_0 pypi [conda] torchaudio 2.0.0.dev20230130+cu116 pypi_0 pypi [conda] torchvision 0.15.0.dev20230130+cu116 pypi_0 pypi cc @ezyang @gchanan @zou3519 @VitalyFedyunin @ejguan
process
random hangs and failures when sending tensors that are split using torch split in a joinablequeue 🐛 random hangs and failures when sending tensors that are split using torch split in a joinablequeue splitting tensors using torch split and sending them to processes using a joinablequeue seems to cause random errors and hangs in while works perfectly fine on i tried to make the code to reproduce as small as i could the key ingredients are torch split and joinablequeue the following script hangs on my cuda machine using pytorch while it completes successfully on pytorch import os import sys import tempfile import torch import torch distributed as dist import torch multiprocessing as mp def setup rank int world size int none backend nccl if torch cuda is available else gloo dist init process group backend init method tcp format rank rank world size world size def cleanup none dist destroy process group def demo basic rank int queue mp joinablequeue world size int none setup rank world size device f cuda rank if torch cuda is available else cpu while true batch queue get batch batch to device try negative in batch batch lt all item if negative in batch print found negative in batch sys stderr finally queue task done def split batch batch torch tensor world size int torch tensor return torch split batch batch shape world size if i use torch split batch batch shape world size clone instead no error is observed def run demo world size int none print torch version file sys stderr num batches batch size ctx mp get context spawn queues os environ os environ processes world size for i in range world size for p in processes p start for i in range num batches large batch torch randint size batch size batches split batch large batch world size if i remove this line and send the large batch instead no error is observed print f queuing batch i file sys stderr for batch queue in zip batches queues queue put batch for q in queues q join for p in processes p terminate def main none run demo if name main main on cpu the behaviour is more random i sometimes observe the following error after some runtime traceback most recent call last file opt conda lib multiprocessing queues py line in feed obj forkingpickler dumps obj file opt conda lib multiprocessing reduction py line in dumps cls buf protocol dump obj file opt conda lib site packages torch multiprocessing reductions py line in reduce storage metadata storage share filename cpu runtimeerror trying to resize storage that is not resizable while sometime the code runs successfully i verified that the code runs fine in in both cpu and gpu but don t know about other versions versions cuda environment pytorch version is debug build false cuda used to build pytorch rocm used to build pytorch n a os ubuntu lts gcc version ubuntu clang version could not collect cmake version version libc version glibc python version default oct bit runtime python platform linux with is cuda available false cuda runtime version no cuda cuda module loading set to n a gpu models and configuration no cuda nvidia driver version no cuda cudnn version no cuda hip runtime version n a miopen runtime version n a is xnnpack available true cpu architecture cpu op mode s bit bit byte order little endian cpu s on line cpu s list thread s per core core s per socket socket s numa node s vendor id genuineintel cpu family model model name intel r xeon r cpu stepping cpu mhz cpu max mhz cpu min mhz bogomips hypervisor vendor xen virtualization type full cache cache cache cache numa cpu s flags fpu vme de pse tsc msr pae mce apic sep mtrr pge mca cmov pat clflush mmx fxsr sse ht syscall nx rdtscp lm constant tsc rep good nopl xtopology nonstop tsc cpuid aperfmperf pni pclmulqdq fma pcid movbe popcnt tsc deadline timer aes xsave avx rdrand hypervisor lahf lm abm cpuid fault invpcid single pti fsgsbase hle smep erms invpcid rtmrdseed adx xsaveopt versions of relevant libraries numpy numpyro pytorch triton torch torchaudio torchvision blas mkl mkl mkl service mkl fft mkl random numpy pypi pypi numpyro pypi pypi pytorch triton pypi pypi torch pypi pypi torchaudio pypi pypi torchvision pypi pypi cpu environment pytorch version is debug build false cuda used to build pytorch rocm used to build pytorch n a os ubuntu lts gcc version ubuntu clang version could not collect cmake version version libc version glibc python version default oct bit runtime python platform linux linuxkit with is cuda available false cuda runtime version no cuda cuda module loading set to n a gpu models and configuration no cuda nvidia driver version no cuda cudnn version no cuda hip runtime version n a miopen runtime version n a is xnnpack available true cpu architecture cpu op mode s bit bit byte order little endian cpu s on line cpu s list thread s per core core s per socket socket s vendor id genuineintel cpu family model model name intel r core tm cpu stepping cpu mhz bogomips cache cache cache cache flags fpu vme de pse tsc msr pae mce apic sep mtrr pge mca cmov pat clflush mmx fxsr sse ht syscall nx lm constant tsc rep good nopl xtopology nonstop tsc cpuid pni pclmulqdq fma pcid movbe popcnt aes xsave avx rdrand hypervisor lahf lm abm fsgsbase smep erms rdseed adx smap clflushopt xsaveopt xsavec arat versions of relevant libraries numpy numpyro pytorch triton torch torchaudio torchvision blas mkl mkl mkl service mkl fft mkl random numpy pypi pypi numpyro pypi pypi pytorch triton pypi pypi torch pypi pypi torchaudio pypi pypi torchvision pypi pypi cc ezyang gchanan vitalyfedyunin ejguan
1
11,552
14,435,151,346
IssuesEvent
2020-12-07 08:18:59
MelissaMorales13/4a
https://api.github.com/repos/MelissaMorales13/4a
closed
fill_size_estimating_template
process-dashboard
-llenado de template de estimación de líneas de código en process Dashboard -correr el PROBE wizard
1.0
fill_size_estimating_template - -llenado de template de estimación de líneas de código en process Dashboard -correr el PROBE wizard
process
fill size estimating template llenado de template de estimación de líneas de código en process dashboard correr el probe wizard
1
205,283
15,964,836,815
IssuesEvent
2021-04-16 06:54:19
icytornado/pe
https://api.github.com/repos/icytornado/pe
closed
edit selected command confusing/ not working
severity.Medium type.DocumentationBug
steps to reproduce: 1.edit selected - p 84311456 expected the selected is edited actual: No selected person(s) to edit warning. Cannot select person, do you mean selecting the person by clicking on it. How do i select a person exactlly? state clearly in the UG please ![Screenshot 2021-04-16 at 2.44.47 PM.png](https://raw.githubusercontent.com/icytornado/pe/main/files/a6ae1392-299f-4a75-8140-27675789f9be.png) <!--session: 1618553634087-453d3125-eee2-4a72-b66c-3f70318ee1a1-->
1.0
edit selected command confusing/ not working - steps to reproduce: 1.edit selected - p 84311456 expected the selected is edited actual: No selected person(s) to edit warning. Cannot select person, do you mean selecting the person by clicking on it. How do i select a person exactlly? state clearly in the UG please ![Screenshot 2021-04-16 at 2.44.47 PM.png](https://raw.githubusercontent.com/icytornado/pe/main/files/a6ae1392-299f-4a75-8140-27675789f9be.png) <!--session: 1618553634087-453d3125-eee2-4a72-b66c-3f70318ee1a1-->
non_process
edit selected command confusing not working steps to reproduce edit selected p expected the selected is edited actual no selected person s to edit warning cannot select person do you mean selecting the person by clicking on it how do i select a person exactlly state clearly in the ug please
0
109,716
23,812,026,163
IssuesEvent
2022-09-04 22:23:24
files-community/Files
https://api.github.com/repos/files-community/Files
opened
Nullable reference types annotations
codebase quality triage approved good first issue
We are migrating to WASDK and adopt .NET 6, so we get build-in nullable reference types support. We should annotate our code to eliminate thousands of warnings about NRT. This is a large one, but we can do it progressively.
1.0
Nullable reference types annotations - We are migrating to WASDK and adopt .NET 6, so we get build-in nullable reference types support. We should annotate our code to eliminate thousands of warnings about NRT. This is a large one, but we can do it progressively.
non_process
nullable reference types annotations we are migrating to wasdk and adopt net so we get build in nullable reference types support we should annotate our code to eliminate thousands of warnings about nrt this is a large one but we can do it progressively
0
20,955
4,650,991,076
IssuesEvent
2016-10-03 08:15:58
syl20bnr/spacemacs
https://api.github.com/repos/syl20bnr/spacemacs
closed
Mismatch of "package.el" and "LAYERS.org"
- Bug tracker - Documentation ✏ Fixed in develop
#### Description When using "configuration-layer/create-layer" to create a new layer, the "package.el" uses a keyword "defconst", while "LAYERS.org" uses "setq" #### Reproduction guide - Start Emacd:/home/cyd/.spacemacs.private/my-cygwin/ #### System Info - OS: windows-nt - Emacs: 24.5.1 - Spacemacs: 0.105.21 - Spacemacs branch: cyd (rev. 0283f64) - Graphic display: t - Distribution: spacemacs - Editing style: emacs - Completion: helm - Layers: ```elisp ((auto-completion :variables auto-completion-return-key-behavior 'complete auto-completion-tab-key-behavior 'cycle auto-completion-enable-snippets-in-popup t auto-completion-complete-with-key-sequence nil auto-completion-complete-with-key-sequence-delay 0.1 auto-completion-private-snippets-directory (concat (car dotspacemacs-configuration-layer-path) "snippets/") auto-completion-enable-sort-by-usage t) better-defaults emacs-lisp git org (shell :variables shell-default-height 30 shell-default-position 'bottom) spell-checking syntax-checking themes-megapack chinese my-cygwin) ```
1.0
Mismatch of "package.el" and "LAYERS.org" - #### Description When using "configuration-layer/create-layer" to create a new layer, the "package.el" uses a keyword "defconst", while "LAYERS.org" uses "setq" #### Reproduction guide - Start Emacd:/home/cyd/.spacemacs.private/my-cygwin/ #### System Info - OS: windows-nt - Emacs: 24.5.1 - Spacemacs: 0.105.21 - Spacemacs branch: cyd (rev. 0283f64) - Graphic display: t - Distribution: spacemacs - Editing style: emacs - Completion: helm - Layers: ```elisp ((auto-completion :variables auto-completion-return-key-behavior 'complete auto-completion-tab-key-behavior 'cycle auto-completion-enable-snippets-in-popup t auto-completion-complete-with-key-sequence nil auto-completion-complete-with-key-sequence-delay 0.1 auto-completion-private-snippets-directory (concat (car dotspacemacs-configuration-layer-path) "snippets/") auto-completion-enable-sort-by-usage t) better-defaults emacs-lisp git org (shell :variables shell-default-height 30 shell-default-position 'bottom) spell-checking syntax-checking themes-megapack chinese my-cygwin) ```
non_process
mismatch of package el and layers org description when using configuration layer create layer to create a new layer the package el uses a keyword defconst while layers org uses setq reproduction guide start emacd home cyd spacemacs private my cygwin system info os windows nt emacs spacemacs spacemacs branch cyd rev graphic display t distribution spacemacs editing style emacs completion helm layers elisp auto completion variables auto completion return key behavior complete auto completion tab key behavior cycle auto completion enable snippets in popup t auto completion complete with key sequence nil auto completion complete with key sequence delay auto completion private snippets directory concat car dotspacemacs configuration layer path snippets auto completion enable sort by usage t better defaults emacs lisp git org shell variables shell default height shell default position bottom spell checking syntax checking themes megapack chinese my cygwin
0
4,846
7,739,846,889
IssuesEvent
2018-05-28 17:56:22
codeforireland2/transparentwater_backend_new
https://api.github.com/repos/codeforireland2/transparentwater_backend_new
opened
Enable jsdoc for all code
development process
This is already set up for the mobile app. We should enable js doc in eslint to force documentation of all functions where relevant.
1.0
Enable jsdoc for all code - This is already set up for the mobile app. We should enable js doc in eslint to force documentation of all functions where relevant.
process
enable jsdoc for all code this is already set up for the mobile app we should enable js doc in eslint to force documentation of all functions where relevant
1
320,592
9,783,326,853
IssuesEvent
2019-06-08 08:59:41
ballerina-platform/ballerina-lang
https://api.github.com/repos/ballerina-platform/ballerina-lang
closed
Cannot run not-Nil returning function if that function body contains a lambda function reference
Component/jBallerina Priority/Blocker Type/Bug
Please consider the following code ```ballerina import ballerina/io; public function main() { io:println("Test"); } function testStreamPublishingAndSubscriptionForAssignableTupleTypeStream(string s1) returns int { test("lambda", zzz); return 0; } function test(string val, function (string) x) { x.call(val); } function zzz((string) val) { io:println(val); } ``` Above code will be compiled. But cannot be run. Following will be thrown java.lang.VerifyError: Inconsistent stackmap frames at branch target 223 Exception Details: Location: ``` test.testStreamPublishingAndSubscriptionForAssignableTupleTypeStream(Lorg/ballerinalang/jvm/Strand;Ljava/lang/String;Z)J @223: aload_0 Reason: Type null (current frame, locals[5]) is not assignable to integer (stack map, locals[5]) Current Frame: bci: @28 flags: { } locals: { 'org/ballerinalang/jvm/Strand', 'java/lang/String', long, long_2nd, null, null, integer } stack: { integer } Stackmap Frame: bci: @223 flags: { } locals: { 'org/ballerinalang/jvm/Strand', 'java/lang/String', long, long_2nd, null, integer } stack: { } ```
1.0
Cannot run not-Nil returning function if that function body contains a lambda function reference - Please consider the following code ```ballerina import ballerina/io; public function main() { io:println("Test"); } function testStreamPublishingAndSubscriptionForAssignableTupleTypeStream(string s1) returns int { test("lambda", zzz); return 0; } function test(string val, function (string) x) { x.call(val); } function zzz((string) val) { io:println(val); } ``` Above code will be compiled. But cannot be run. Following will be thrown java.lang.VerifyError: Inconsistent stackmap frames at branch target 223 Exception Details: Location: ``` test.testStreamPublishingAndSubscriptionForAssignableTupleTypeStream(Lorg/ballerinalang/jvm/Strand;Ljava/lang/String;Z)J @223: aload_0 Reason: Type null (current frame, locals[5]) is not assignable to integer (stack map, locals[5]) Current Frame: bci: @28 flags: { } locals: { 'org/ballerinalang/jvm/Strand', 'java/lang/String', long, long_2nd, null, null, integer } stack: { integer } Stackmap Frame: bci: @223 flags: { } locals: { 'org/ballerinalang/jvm/Strand', 'java/lang/String', long, long_2nd, null, integer } stack: { } ```
non_process
cannot run not nil returning function if that function body contains a lambda function reference please consider the following code ballerina import ballerina io public function main io println test function teststreampublishingandsubscriptionforassignabletupletypestream string returns int test lambda zzz return function test string val function string x x call val function zzz string val io println val above code will be compiled but cannot be run following will be thrown java lang verifyerror inconsistent stackmap frames at branch target exception details location test teststreampublishingandsubscriptionforassignabletupletypestream lorg ballerinalang jvm strand ljava lang string z j aload reason type null current frame locals is not assignable to integer stack map locals current frame bci flags locals org ballerinalang jvm strand java lang string long long null null integer stack integer stackmap frame bci flags locals org ballerinalang jvm strand java lang string long long null integer stack
0
8,780
11,901,098,941
IssuesEvent
2020-03-30 11:51:36
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
reopened
mutualism
multi-species process
Finds this term GO:0085030 | symbiotic process benefiting host defined GO:0085030 JSON Definition (GO:0085030 GONUTS page) A process carried out by symbiont gene products that enables a symbiotic interaction with a host organism, that is beneficial to the host organism. related 'mutualism' So, there isn't a term for mutalism (both the pathogen and host benefit?) This is probably because if you are annotating a pathogen gene it probably didn't *evolve* to benefit the host. Just wanted to check this is the reason. @CuzickA
1.0
mutualism - Finds this term GO:0085030 | symbiotic process benefiting host defined GO:0085030 JSON Definition (GO:0085030 GONUTS page) A process carried out by symbiont gene products that enables a symbiotic interaction with a host organism, that is beneficial to the host organism. related 'mutualism' So, there isn't a term for mutalism (both the pathogen and host benefit?) This is probably because if you are annotating a pathogen gene it probably didn't *evolve* to benefit the host. Just wanted to check this is the reason. @CuzickA
process
mutualism finds this term go symbiotic process benefiting host defined go json definition go gonuts page a process carried out by symbiont gene products that enables a symbiotic interaction with a host organism that is beneficial to the host organism related mutualism so there isn t a term for mutalism both the pathogen and host benefit this is probably because if you are annotating a pathogen gene it probably didn t evolve to benefit the host just wanted to check this is the reason cuzicka
1
7,737
10,855,062,665
IssuesEvent
2019-11-13 17:35:09
codeuniversity/smag-mvp
https://api.github.com/repos/codeuniversity/smag-mvp
opened
[Images] Object recognition model to analyse interests
Image Processing
To receive interests from photos, we need an additional general object recognition model.
1.0
[Images] Object recognition model to analyse interests - To receive interests from photos, we need an additional general object recognition model.
process
object recognition model to analyse interests to receive interests from photos we need an additional general object recognition model
1
518,029
15,022,722,723
IssuesEvent
2021-02-01 17:17:46
godaddy-wordpress/coblocks
https://api.github.com/repos/godaddy-wordpress/coblocks
closed
Gist Block: Console error message
[Priority] Low [Type] Bug
### Describe the bug: Add the Gist Block to page or post and It will produce an error in the console when you will try to add some random text rather than adding GitHub URL. ### To reproduce: 1. Go to 'Gist block' 2. Click on 'Textbox provided by gist block' 3. Now try to add some random text in that 4. See error in browser console ### Expected behavior: The error should not be there. ### Screenshots: ![Screenshot from 2021-01-21 16-33-38](https://user-images.githubusercontent.com/25550562/105343760-0e0a9380-5c08-11eb-9abf-ed231739e324.png) ### Isolating the problem: <!-- Mark completed items with an [x]. --> - [x] This bug happens with no other plugins activated - [x] This bug happens with a default WordPress theme active - [x] This bug happens **without** the Gutenberg plugin active - [x] I can reproduce this bug consistently using the steps above ### WordPress version: Latest 5.6 ### Gutenberg version: <!-- if applicable -->
1.0
Gist Block: Console error message - ### Describe the bug: Add the Gist Block to page or post and It will produce an error in the console when you will try to add some random text rather than adding GitHub URL. ### To reproduce: 1. Go to 'Gist block' 2. Click on 'Textbox provided by gist block' 3. Now try to add some random text in that 4. See error in browser console ### Expected behavior: The error should not be there. ### Screenshots: ![Screenshot from 2021-01-21 16-33-38](https://user-images.githubusercontent.com/25550562/105343760-0e0a9380-5c08-11eb-9abf-ed231739e324.png) ### Isolating the problem: <!-- Mark completed items with an [x]. --> - [x] This bug happens with no other plugins activated - [x] This bug happens with a default WordPress theme active - [x] This bug happens **without** the Gutenberg plugin active - [x] I can reproduce this bug consistently using the steps above ### WordPress version: Latest 5.6 ### Gutenberg version: <!-- if applicable -->
non_process
gist block console error message describe the bug add the gist block to page or post and it will produce an error in the console when you will try to add some random text rather than adding github url to reproduce go to gist block click on textbox provided by gist block now try to add some random text in that see error in browser console expected behavior the error should not be there screenshots isolating the problem this bug happens with no other plugins activated this bug happens with a default wordpress theme active this bug happens without the gutenberg plugin active i can reproduce this bug consistently using the steps above wordpress version latest gutenberg version
0
13,508
16,047,418,597
IssuesEvent
2021-04-22 15:03:39
JitenPalaparthi/readyGo
https://api.github.com/repos/JitenPalaparthi/readyGo
closed
lint and fix the code
continuous process improvement
golangci-lint is a good static analysis tool. There seems to be lot of issues. All to be fixed,
1.0
lint and fix the code - golangci-lint is a good static analysis tool. There seems to be lot of issues. All to be fixed,
process
lint and fix the code golangci lint is a good static analysis tool there seems to be lot of issues all to be fixed
1
5,459
8,320,090,531
IssuesEvent
2018-09-25 19:07:34
GoogleCloudPlatform/google-cloud-python
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
opened
BigQuery: 'test_load_table_from_uri_then_dump_table' flakes w/ 429 in bucket teardown
api: bigquery flaky testing type: process
From: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/8387 ```python ____________ TestBigQuery.test_load_table_from_uri_then_dump_table _____________ self = <tests.system.TestBigQuery testMethod=test_load_table_from_uri_then_dump_table> def tearDown(self): from google.cloud.storage import Bucket from google.cloud.exceptions import BadRequest from google.cloud.exceptions import Conflict def _still_in_use(bad_request): return any(error['reason'] == 'resourceInUse' for error in bad_request._errors) retry_in_use = RetryErrors(BadRequest, error_predicate=_still_in_use) retry_409 = RetryErrors(Conflict) for doomed in self.to_delete: if isinstance(doomed, Bucket): > retry_409(doomed.delete)(force=True) tests/system.py:150: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../test_utils/test_utils/retry.py:95: in wrapped_function return to_wrap(*args, **kwargs) ../storage/google/cloud/storage/bucket.py:757: in delete _target_object=None) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.storage._http.Connection object at 0x7f8b7d830410> method = 'DELETE', path = '/b/bq_load_test_8387_1537899235', query_params = {} data = None, content_type = None, headers = None, api_base_url = None api_version = None, expect_json = True, _target_object = None def api_request(self, method, path, query_params=None, data=None, content_type=None, headers=None, api_base_url=None, api_version=None, expect_json=True, _target_object=None): """Make a request over the HTTP transport to the API. You shouldn't need to use this method, but if you plan to interact with the API using these primitives, this is the correct one to use. :type method: str :param method: The HTTP method name (ie, ``GET``, ``POST``, etc). Required. :type path: str :param path: The path to the resource (ie, ``'/b/bucket-name'``). Required. :type query_params: dict or list :param query_params: A dictionary of keys and values (or list of key-value pairs) to insert into the query string of the URL. :type data: str :param data: The data to send as the body of the request. Default is the empty string. :type content_type: str :param content_type: The proper MIME type of the data provided. Default is None. :type headers: dict :param headers: extra HTTP headers to be sent with the request. :type api_base_url: str :param api_base_url: The base URL for the API endpoint. Typically you won't have to provide this. Default is the standard API base URL. :type api_version: str :param api_version: The version of the API to call. Typically you shouldn't provide this and instead use the default for the library. Default is the latest API version supported by google-cloud-python. :type expect_json: bool :param expect_json: If True, this method will try to parse the response as JSON and raise an exception if that cannot be done. Default is True. :type _target_object: :class:`object` :param _target_object: (Optional) Protected argument to be used by library callers. This can allow custom behavior, for example, to defer an HTTP request and complete initialization of the object at a later time. :raises ~google.cloud.exceptions.GoogleCloudError: if the response code is not 200 OK. :raises ValueError: if the response content type is not JSON. :rtype: dict or str :returns: The API response payload, either as a raw string or a dictionary if the response is valid JSON. """ url = self.build_api_url(path=path, query_params=query_params, api_base_url=api_base_url, api_version=api_version) # Making the executive decision that any dictionary # data will be sent properly as JSON. if data and isinstance(data, dict): data = json.dumps(data) content_type = 'application/json' response = self._make_request( method=method, url=url, data=data, content_type=content_type, headers=headers, target_object=_target_object) if not 200 <= response.status_code < 300: > raise exceptions.from_http_response(response) E TooManyRequests: 429 DELETE https://www.googleapis.com/storage/v1/b/bq_load_test_8387_1537899235: The project exceeded the rate limit for creating and deleting buckets. ```
1.0
BigQuery: 'test_load_table_from_uri_then_dump_table' flakes w/ 429 in bucket teardown - From: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/8387 ```python ____________ TestBigQuery.test_load_table_from_uri_then_dump_table _____________ self = <tests.system.TestBigQuery testMethod=test_load_table_from_uri_then_dump_table> def tearDown(self): from google.cloud.storage import Bucket from google.cloud.exceptions import BadRequest from google.cloud.exceptions import Conflict def _still_in_use(bad_request): return any(error['reason'] == 'resourceInUse' for error in bad_request._errors) retry_in_use = RetryErrors(BadRequest, error_predicate=_still_in_use) retry_409 = RetryErrors(Conflict) for doomed in self.to_delete: if isinstance(doomed, Bucket): > retry_409(doomed.delete)(force=True) tests/system.py:150: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../test_utils/test_utils/retry.py:95: in wrapped_function return to_wrap(*args, **kwargs) ../storage/google/cloud/storage/bucket.py:757: in delete _target_object=None) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.cloud.storage._http.Connection object at 0x7f8b7d830410> method = 'DELETE', path = '/b/bq_load_test_8387_1537899235', query_params = {} data = None, content_type = None, headers = None, api_base_url = None api_version = None, expect_json = True, _target_object = None def api_request(self, method, path, query_params=None, data=None, content_type=None, headers=None, api_base_url=None, api_version=None, expect_json=True, _target_object=None): """Make a request over the HTTP transport to the API. You shouldn't need to use this method, but if you plan to interact with the API using these primitives, this is the correct one to use. :type method: str :param method: The HTTP method name (ie, ``GET``, ``POST``, etc). Required. :type path: str :param path: The path to the resource (ie, ``'/b/bucket-name'``). Required. :type query_params: dict or list :param query_params: A dictionary of keys and values (or list of key-value pairs) to insert into the query string of the URL. :type data: str :param data: The data to send as the body of the request. Default is the empty string. :type content_type: str :param content_type: The proper MIME type of the data provided. Default is None. :type headers: dict :param headers: extra HTTP headers to be sent with the request. :type api_base_url: str :param api_base_url: The base URL for the API endpoint. Typically you won't have to provide this. Default is the standard API base URL. :type api_version: str :param api_version: The version of the API to call. Typically you shouldn't provide this and instead use the default for the library. Default is the latest API version supported by google-cloud-python. :type expect_json: bool :param expect_json: If True, this method will try to parse the response as JSON and raise an exception if that cannot be done. Default is True. :type _target_object: :class:`object` :param _target_object: (Optional) Protected argument to be used by library callers. This can allow custom behavior, for example, to defer an HTTP request and complete initialization of the object at a later time. :raises ~google.cloud.exceptions.GoogleCloudError: if the response code is not 200 OK. :raises ValueError: if the response content type is not JSON. :rtype: dict or str :returns: The API response payload, either as a raw string or a dictionary if the response is valid JSON. """ url = self.build_api_url(path=path, query_params=query_params, api_base_url=api_base_url, api_version=api_version) # Making the executive decision that any dictionary # data will be sent properly as JSON. if data and isinstance(data, dict): data = json.dumps(data) content_type = 'application/json' response = self._make_request( method=method, url=url, data=data, content_type=content_type, headers=headers, target_object=_target_object) if not 200 <= response.status_code < 300: > raise exceptions.from_http_response(response) E TooManyRequests: 429 DELETE https://www.googleapis.com/storage/v1/b/bq_load_test_8387_1537899235: The project exceeded the rate limit for creating and deleting buckets. ```
process
bigquery test load table from uri then dump table flakes w in bucket teardown from python testbigquery test load table from uri then dump table self def teardown self from google cloud storage import bucket from google cloud exceptions import badrequest from google cloud exceptions import conflict def still in use bad request return any error resourceinuse for error in bad request errors retry in use retryerrors badrequest error predicate still in use retry retryerrors conflict for doomed in self to delete if isinstance doomed bucket retry doomed delete force true tests system py test utils test utils retry py in wrapped function return to wrap args kwargs storage google cloud storage bucket py in delete target object none self method delete path b bq load test query params data none content type none headers none api base url none api version none expect json true target object none def api request self method path query params none data none content type none headers none api base url none api version none expect json true target object none make a request over the http transport to the api you shouldn t need to use this method but if you plan to interact with the api using these primitives this is the correct one to use type method str param method the http method name ie get post etc required type path str param path the path to the resource ie b bucket name required type query params dict or list param query params a dictionary of keys and values or list of key value pairs to insert into the query string of the url type data str param data the data to send as the body of the request default is the empty string type content type str param content type the proper mime type of the data provided default is none type headers dict param headers extra http headers to be sent with the request type api base url str param api base url the base url for the api endpoint typically you won t have to provide this default is the standard api base url type api version str param api version the version of the api to call typically you shouldn t provide this and instead use the default for the library default is the latest api version supported by google cloud python type expect json bool param expect json if true this method will try to parse the response as json and raise an exception if that cannot be done default is true type target object class object param target object optional protected argument to be used by library callers this can allow custom behavior for example to defer an http request and complete initialization of the object at a later time raises google cloud exceptions googleclouderror if the response code is not ok raises valueerror if the response content type is not json rtype dict or str returns the api response payload either as a raw string or a dictionary if the response is valid json url self build api url path path query params query params api base url api base url api version api version making the executive decision that any dictionary data will be sent properly as json if data and isinstance data dict data json dumps data content type application json response self make request method method url url data data content type content type headers headers target object target object if not response status code raise exceptions from http response response e toomanyrequests delete the project exceeded the rate limit for creating and deleting buckets
1
4,624
7,468,696,503
IssuesEvent
2018-04-02 19:56:09
syndesisio/syndesis
https://api.github.com/repos/syndesisio/syndesis
closed
UX: Add navigation to the UX Tracker
cat/process group/uxd
## This is a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Feature request [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [x] Documentation issue or request </code></pre> ## The problem The UX work tracker (#2113) should include two items on the nav bar in addition to the landing page showing the actual work being tracked. The nav should be "Home | Icons | Style Guide" The Icons should link to this location: https://github.com/syndesisio/syndesis/tree/master/ux/connections_icons The Style Guide will be developed. For now, the link should display a blank page with text like "coming soon."
1.0
UX: Add navigation to the UX Tracker - ## This is a... <!-- Check one of the following options with "x" --> <pre><code> [ ] Feature request [ ] Regression (a behavior that used to work and stopped working in a new release) [ ] Bug report <!-- Please search GitHub for a similar issue or PR before submitting --> [x] Documentation issue or request </code></pre> ## The problem The UX work tracker (#2113) should include two items on the nav bar in addition to the landing page showing the actual work being tracked. The nav should be "Home | Icons | Style Guide" The Icons should link to this location: https://github.com/syndesisio/syndesis/tree/master/ux/connections_icons The Style Guide will be developed. For now, the link should display a blank page with text like "coming soon."
process
ux add navigation to the ux tracker this is a feature request regression a behavior that used to work and stopped working in a new release bug report documentation issue or request the problem the ux work tracker should include two items on the nav bar in addition to the landing page showing the actual work being tracked the nav should be home icons style guide the icons should link to this location the style guide will be developed for now the link should display a blank page with text like coming soon
1
9,736
12,731,824,960
IssuesEvent
2020-06-25 09:25:15
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
Strange @@relation error message when running prisma generate
bug/2-confirmed kind/bug process/candidate team/engines topic: schema validation
<!-- Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description Given this schema ```prisma generator js { provider = "prisma-client-js" } datasource pg { provider = "postgresql" url = "postgresql://m@localhost:5432/prisma" } model Post { content String? createdAt DateTime id String @id published Boolean @default(false) title String @default("") updatedAt DateTime author User? } model User { email String @default("") @unique id String @id name String? post Post[] } ``` When I run `./node_modules/.bin/prisma generate`, I get the following error: ``` Error: Schema parsing error: Error parsing attribute "@@relation": The relation field `author` on Model `Post` must specify the `fields` argument in the @relation directive. --> schema.prisma:17 | 16 | updatedAt DateTime 17 | author User? 18 | } | ``` ## Expected behavior I think I need to add `@relation(fields:[id])` directive, but this error has thrown me off. ## Environment & setup - OS: OSX - Database: Postgres - Prisma version: "@prisma/cli": "2.1.0", - Node.js version: v12.16.1
1.0
Strange @@relation error message when running prisma generate - <!-- Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description Given this schema ```prisma generator js { provider = "prisma-client-js" } datasource pg { provider = "postgresql" url = "postgresql://m@localhost:5432/prisma" } model Post { content String? createdAt DateTime id String @id published Boolean @default(false) title String @default("") updatedAt DateTime author User? } model User { email String @default("") @unique id String @id name String? post Post[] } ``` When I run `./node_modules/.bin/prisma generate`, I get the following error: ``` Error: Schema parsing error: Error parsing attribute "@@relation": The relation field `author` on Model `Post` must specify the `fields` argument in the @relation directive. --> schema.prisma:17 | 16 | updatedAt DateTime 17 | author User? 18 | } | ``` ## Expected behavior I think I need to add `@relation(fields:[id])` directive, but this error has thrown me off. ## Environment & setup - OS: OSX - Database: Postgres - Prisma version: "@prisma/cli": "2.1.0", - Node.js version: v12.16.1
process
strange relation error message when running prisma generate thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description given this schema prisma generator js provider prisma client js datasource pg provider postgresql url postgresql m localhost prisma model post content string createdat datetime id string id published boolean default false title string default updatedat datetime author user model user email string default unique id string id name string post post when i run node modules bin prisma generate i get the following error error schema parsing error error parsing attribute relation the relation field author on model post must specify the fields argument in the relation directive schema prisma updatedat datetime author user expected behavior i think i need to add relation fields directive but this error has thrown me off environment setup os osx database postgres prisma version prisma cli node js version
1
5,800
8,641,448,378
IssuesEvent
2018-11-24 17:52:02
carloseduardov8/Viajato
https://api.github.com/repos/carloseduardov8/Viajato
closed
Executar testes de sistema
Priority:High Process:Run Test Case
Verificar se cada cenário foi adequadamente implementado, realizar correções necessárias e gerar artefato sintetizando resultados obtidos.
1.0
Executar testes de sistema - Verificar se cada cenário foi adequadamente implementado, realizar correções necessárias e gerar artefato sintetizando resultados obtidos.
process
executar testes de sistema verificar se cada cenário foi adequadamente implementado realizar correções necessárias e gerar artefato sintetizando resultados obtidos
1
19,109
25,162,644,304
IssuesEvent
2022-11-10 17:59:31
GoogleCloudPlatform/golang-samples
https://api.github.com/repos/GoogleCloudPlatform/golang-samples
closed
kokoro: run_tests.sh always exits 0
priority: p1 type: process samples
https://github.com/GoogleCloudPlatform/golang-samples/blob/d2f99a1ba256d31151ffe895c0e966712e09729f/testing/kokoro/system_tests.sh#L169-L172 ... https://github.com/GoogleCloudPlatform/golang-samples/blob/d2f99a1ba256d31151ffe895c0e966712e09729f/testing/kokoro/system_tests.sh#L247 There are no updates to `exit_code`, so it seems this will always `exit 0`.
1.0
kokoro: run_tests.sh always exits 0 - https://github.com/GoogleCloudPlatform/golang-samples/blob/d2f99a1ba256d31151ffe895c0e966712e09729f/testing/kokoro/system_tests.sh#L169-L172 ... https://github.com/GoogleCloudPlatform/golang-samples/blob/d2f99a1ba256d31151ffe895c0e966712e09729f/testing/kokoro/system_tests.sh#L247 There are no updates to `exit_code`, so it seems this will always `exit 0`.
process
kokoro run tests sh always exits there are no updates to exit code so it seems this will always exit
1
939
3,408,432,845
IssuesEvent
2015-12-04 10:36:20
triplea-game/triplea
https://api.github.com/repos/triplea-game/triplea
closed
Maps in Github
Closed if no Further Comment Process Improvement
This issue is to discuss how to organize maps into github repositories, whether to group maps into 3 or 4 repositories, or whether to put one map per repository. With multiple maps per repository there are a number of problems that come up: - more difficult to write scripts that work for each repository and every map - difficult to write a sensible build script - no clue how to do map versioning in conjunction with a build script - limited ability to give teams access to just one map, since they'll get access to a full repo and all the maps in there - lots of updates needed in the git work flow. The map makers are pretty okay learning how to do a git clone, commit and push. Throw in doing merges, merge conflicts, or even just updates and it can be a deal breaker for many. Specifically the git work flow is not so trivial when there potentially a dozen or two touching just 3 or 4 repositories. Team work flows are needed, and those are more complicated than the work flow of someone working by themself. - a map checker script is difficult to write, to verify that each map is correct after being downloaded from github releases With one map per repo, these specific problems go away. If there is concern about cloning everything etc, I can provide a one-liner that will work in git-for-windows that will clone every map repo. That will also clone the map "Project" which will then contain the other admin and control scripts which will hopefully automate everything else. For now there is a script that can convert a map zip into a fully working github repository, complete with teams and a working travis build and release.
1.0
Maps in Github - This issue is to discuss how to organize maps into github repositories, whether to group maps into 3 or 4 repositories, or whether to put one map per repository. With multiple maps per repository there are a number of problems that come up: - more difficult to write scripts that work for each repository and every map - difficult to write a sensible build script - no clue how to do map versioning in conjunction with a build script - limited ability to give teams access to just one map, since they'll get access to a full repo and all the maps in there - lots of updates needed in the git work flow. The map makers are pretty okay learning how to do a git clone, commit and push. Throw in doing merges, merge conflicts, or even just updates and it can be a deal breaker for many. Specifically the git work flow is not so trivial when there potentially a dozen or two touching just 3 or 4 repositories. Team work flows are needed, and those are more complicated than the work flow of someone working by themself. - a map checker script is difficult to write, to verify that each map is correct after being downloaded from github releases With one map per repo, these specific problems go away. If there is concern about cloning everything etc, I can provide a one-liner that will work in git-for-windows that will clone every map repo. That will also clone the map "Project" which will then contain the other admin and control scripts which will hopefully automate everything else. For now there is a script that can convert a map zip into a fully working github repository, complete with teams and a working travis build and release.
process
maps in github this issue is to discuss how to organize maps into github repositories whether to group maps into or repositories or whether to put one map per repository with multiple maps per repository there are a number of problems that come up more difficult to write scripts that work for each repository and every map difficult to write a sensible build script no clue how to do map versioning in conjunction with a build script limited ability to give teams access to just one map since they ll get access to a full repo and all the maps in there lots of updates needed in the git work flow the map makers are pretty okay learning how to do a git clone commit and push throw in doing merges merge conflicts or even just updates and it can be a deal breaker for many specifically the git work flow is not so trivial when there potentially a dozen or two touching just or repositories team work flows are needed and those are more complicated than the work flow of someone working by themself a map checker script is difficult to write to verify that each map is correct after being downloaded from github releases with one map per repo these specific problems go away if there is concern about cloning everything etc i can provide a one liner that will work in git for windows that will clone every map repo that will also clone the map project which will then contain the other admin and control scripts which will hopefully automate everything else for now there is a script that can convert a map zip into a fully working github repository complete with teams and a working travis build and release
1
8,581
4,281,366,618
IssuesEvent
2016-07-15 02:24:08
perl6/doc
https://api.github.com/repos/perl6/doc
closed
in-page always-on 404 checking should not be enabled.
build site
Even with the caching, this should be pushed out to our users and run in JS on page load. If we want to have solid 404 checking, it should be one of: - part of the site build (and tested against the local copy) - part of a standalone script that can be run against either the local copy or the deployed site - opt in JS that developers/helpers can enable and report. The recent caching addition makes this ticket a lower priority, but IMO we should still approach this as part of our build process.
1.0
in-page always-on 404 checking should not be enabled. - Even with the caching, this should be pushed out to our users and run in JS on page load. If we want to have solid 404 checking, it should be one of: - part of the site build (and tested against the local copy) - part of a standalone script that can be run against either the local copy or the deployed site - opt in JS that developers/helpers can enable and report. The recent caching addition makes this ticket a lower priority, but IMO we should still approach this as part of our build process.
non_process
in page always on checking should not be enabled even with the caching this should be pushed out to our users and run in js on page load if we want to have solid checking it should be one of part of the site build and tested against the local copy part of a standalone script that can be run against either the local copy or the deployed site opt in js that developers helpers can enable and report the recent caching addition makes this ticket a lower priority but imo we should still approach this as part of our build process
0
99,788
12,477,457,881
IssuesEvent
2020-05-29 14:59:34
COVID19Tracking/website
https://api.github.com/repos/COVID19Tracking/website
closed
Refactor animation on mobile menu
DESIGN
The mobile menu navigation currently uses [react-expand-animated](https://www.npmjs.com/package/react-expand-animated), but we should move to smoother CSS transitions. Some thoughts: - The menu will *always* have different heights because of user screen zoom, font availability, and state of expanded submenus, so never set a height on the menu in CSS.
1.0
Refactor animation on mobile menu - The mobile menu navigation currently uses [react-expand-animated](https://www.npmjs.com/package/react-expand-animated), but we should move to smoother CSS transitions. Some thoughts: - The menu will *always* have different heights because of user screen zoom, font availability, and state of expanded submenus, so never set a height on the menu in CSS.
non_process
refactor animation on mobile menu the mobile menu navigation currently uses but we should move to smoother css transitions some thoughts the menu will always have different heights because of user screen zoom font availability and state of expanded submenus so never set a height on the menu in css
0
211,704
23,837,740,937
IssuesEvent
2022-09-06 07:42:12
ReutNetzer/Java_Demo
https://api.github.com/repos/ReutNetzer/Java_Demo
opened
derby-10.8.3.0.jar: 2 vulnerabilities (highest severity is: 9.1)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>derby-10.8.3.0.jar</b></p></summary> <p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/ReutNetzer/Java_Demo/commit/90278f7c4acbc318edaafb37a6af0f25912bcfcd">90278f7c4acbc318edaafb37a6af0f25912bcfcd</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2015-1832](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.1 | derby-10.8.3.0.jar | Direct | 10.12.1.1 | &#9989; | | [CVE-2018-1313](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1313) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | derby-10.8.3.0.jar | Direct | 10.14.2.0 | &#9989; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2015-1832</summary> ### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p> <p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p> <p> Dependency Hierarchy: - :x: **derby-10.8.3.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ReutNetzer/Java_Demo/commit/90278f7c4acbc318edaafb37a6af0f25912bcfcd">90278f7c4acbc318edaafb37a6af0f25912bcfcd</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby before 10.12.1.1, when a Java Security Manager is not in place, allows context-dependent attackers to read arbitrary files or cause a denial of service (resource consumption) via vectors involving XmlVTI and the XML datatype. <p>Publish Date: 2016-10-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832>CVE-2015-1832</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.1</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832</a></p> <p>Release Date: 2016-10-03</p> <p>Fix Resolution: 10.12.1.1</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-1313</summary> ### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p> <p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p> <p> Dependency Hierarchy: - :x: **derby-10.8.3.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ReutNetzer/Java_Demo/commit/90278f7c4acbc318edaafb37a6af0f25912bcfcd">90278f7c4acbc318edaafb37a6af0f25912bcfcd</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> In Apache Derby 10.3.1.4 to 10.14.1.0, a specially-crafted network packet can be used to request the Derby Network Server to boot a database whose location and contents are under the user's control. If the Derby Network Server is not running with a Java Security Manager policy file, the attack is successful. If the server is using a policy file, the policy file must permit the database location to be read for the attack to work. The default Derby Network Server policy file distributed with the affected releases includes a permissive policy as the default Network Server policy, which allows the attack to work. <p>Publish Date: 2018-05-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1313>CVE-2018-1313</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313</a></p> <p>Release Date: 2018-05-07</p> <p>Fix Resolution: 10.14.2.0</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
True
derby-10.8.3.0.jar: 2 vulnerabilities (highest severity is: 9.1) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>derby-10.8.3.0.jar</b></p></summary> <p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p> <p> <p>Found in HEAD commit: <a href="https://github.com/ReutNetzer/Java_Demo/commit/90278f7c4acbc318edaafb37a6af0f25912bcfcd">90278f7c4acbc318edaafb37a6af0f25912bcfcd</a></p></details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | --- | --- | | [CVE-2015-1832](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.1 | derby-10.8.3.0.jar | Direct | 10.12.1.1 | &#9989; | | [CVE-2018-1313](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1313) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | derby-10.8.3.0.jar | Direct | 10.14.2.0 | &#9989; | ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2015-1832</summary> ### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p> <p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p> <p> Dependency Hierarchy: - :x: **derby-10.8.3.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ReutNetzer/Java_Demo/commit/90278f7c4acbc318edaafb37a6af0f25912bcfcd">90278f7c4acbc318edaafb37a6af0f25912bcfcd</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> XML external entity (XXE) vulnerability in the SqlXmlUtil code in Apache Derby before 10.12.1.1, when a Java Security Manager is not in place, allows context-dependent attackers to read arbitrary files or cause a denial of service (resource consumption) via vectors involving XmlVTI and the XML datatype. <p>Publish Date: 2016-10-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-1832>CVE-2015-1832</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.1</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1832</a></p> <p>Release Date: 2016-10-03</p> <p>Fix Resolution: 10.12.1.1</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2018-1313</summary> ### Vulnerable Library - <b>derby-10.8.3.0.jar</b></p> <p>Contains the core Apache Derby database engine, which also includes the embedded JDBC driver.</p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /repository/org/apache/derby/derby/10.8.3.0/derby-10.8.3.0.jar</p> <p> Dependency Hierarchy: - :x: **derby-10.8.3.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ReutNetzer/Java_Demo/commit/90278f7c4acbc318edaafb37a6af0f25912bcfcd">90278f7c4acbc318edaafb37a6af0f25912bcfcd</a></p> <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> In Apache Derby 10.3.1.4 to 10.14.1.0, a specially-crafted network packet can be used to request the Derby Network Server to boot a database whose location and contents are under the user's control. If the Derby Network Server is not running with a Java Security Manager policy file, the attack is successful. If the server is using a policy file, the policy file must permit the database location to be read for the attack to work. The default Derby Network Server policy file distributed with the affected releases includes a permissive policy as the default Network Server policy, which allows the attack to work. <p>Publish Date: 2018-05-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1313>CVE-2018-1313</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>5.3</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1313</a></p> <p>Release Date: 2018-05-07</p> <p>Fix Resolution: 10.14.2.0</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
non_process
derby jar vulnerabilities highest severity is vulnerable library derby jar contains the core apache derby database engine which also includes the embedded jdbc driver path to dependency file pom xml path to vulnerable library repository org apache derby derby derby jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high derby jar direct medium derby jar direct details cve vulnerable library derby jar contains the core apache derby database engine which also includes the embedded jdbc driver path to dependency file pom xml path to vulnerable library repository org apache derby derby derby jar dependency hierarchy x derby jar vulnerable library found in head commit a href found in base branch main vulnerability details xml external entity xxe vulnerability in the sqlxmlutil code in apache derby before when a java security manager is not in place allows context dependent attackers to read arbitrary files or cause a denial of service resource consumption via vectors involving xmlvti and the xml datatype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library derby jar contains the core apache derby database engine which also includes the embedded jdbc driver path to dependency file pom xml path to vulnerable library repository org apache derby derby derby jar dependency hierarchy x derby jar vulnerable library found in head commit a href found in base branch main vulnerability details in apache derby to a specially crafted network packet can be used to request the derby network server to boot a database whose location and contents are under the user s control if the derby network server is not running with a java security manager policy file the attack is successful if the server is using a policy file the policy file must permit the database location to be read for the attack to work the default derby network server policy file distributed with the affected releases includes a permissive policy as the default network server policy which allows the attack to work publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue
0
54,362
11,220,285,996
IssuesEvent
2020-01-07 15:31:34
dotnet/docs
https://api.github.com/repos/dotnet/docs
closed
Add the dotnet/runtime .editorconfig to our samples repo
discussion doc-idea sample-code up-for-grabs
We've long stated that we generally follow the ]dotnet/runtime coding standard](https://github.com/dotnet/runtime/blob/master/docs/coding-guidelines/coding-style.md). They've not added an [.editorconfig](https://github.com/dotnet/runtime/blob/master/.editorconfig) that enforces these guidelines. We should add it to our samples repo.
1.0
Add the dotnet/runtime .editorconfig to our samples repo - We've long stated that we generally follow the ]dotnet/runtime coding standard](https://github.com/dotnet/runtime/blob/master/docs/coding-guidelines/coding-style.md). They've not added an [.editorconfig](https://github.com/dotnet/runtime/blob/master/.editorconfig) that enforces these guidelines. We should add it to our samples repo.
non_process
add the dotnet runtime editorconfig to our samples repo we ve long stated that we generally follow the dotnet runtime coding standard they ve not added an that enforces these guidelines we should add it to our samples repo
0
705,618
24,241,610,462
IssuesEvent
2022-09-27 07:13:31
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
firefox-source-docs.mozilla.org - see bug description
browser-firefox priority-critical engine-gecko
<!-- @browser: Firefox 104.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:104.0) Gecko/20100101 Firefox/104.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/111389 --> **URL**: https://firefox-source-docs.mozilla.org/devtools-user/page_inspector/how_to/examine_grid_layouts/index.html **Browser / Version**: Firefox 104.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: Possibly wrong image used **Steps to Reproduce**: The second image on the page, css-pane1.png, under "In the CSS Pane", I believe is the wrong image. The text says, "any instance of a display: grid declaration gets a grid icon included within it: image1. The CSS pane of the Firefox devtools, showing the CSS for a grid layout with a grid icon included next to display: grid" BUT the image shows "display: flex" with the flex icon, not the grid icon. Perhaps this should be img css-pane2.png? <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/f1f03f4c-df1a-41f9-a037-f52aa01b0e34.jpg"> </details> <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/71d6df1a-5bd3-4108-9891-7bd909bace79.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
firefox-source-docs.mozilla.org - see bug description - <!-- @browser: Firefox 104.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:104.0) Gecko/20100101 Firefox/104.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/111389 --> **URL**: https://firefox-source-docs.mozilla.org/devtools-user/page_inspector/how_to/examine_grid_layouts/index.html **Browser / Version**: Firefox 104.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: Possibly wrong image used **Steps to Reproduce**: The second image on the page, css-pane1.png, under "In the CSS Pane", I believe is the wrong image. The text says, "any instance of a display: grid declaration gets a grid icon included within it: image1. The CSS pane of the Firefox devtools, showing the CSS for a grid layout with a grid icon included next to display: grid" BUT the image shows "display: flex" with the flex icon, not the grid icon. Perhaps this should be img css-pane2.png? <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/f1f03f4c-df1a-41f9-a037-f52aa01b0e34.jpg"> </details> <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/71d6df1a-5bd3-4108-9891-7bd909bace79.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
firefox source docs mozilla org see bug description url browser version firefox operating system windows tested another browser yes chrome problem type something else description possibly wrong image used steps to reproduce the second image on the page css png under in the css pane i believe is the wrong image the text says any instance of a display grid declaration gets a grid icon included within it the css pane of the firefox devtools showing the css for a grid layout with a grid icon included next to display grid but the image shows display flex with the flex icon not the grid icon perhaps this should be img css png view the screenshot img alt screenshot src view the screenshot img alt screenshot src browser configuration none from with ❤️
0
773,033
27,143,697,536
IssuesEvent
2023-02-16 18:11:10
l7mp/stunner-gateway-operator
https://api.github.com/repos/l7mp/stunner-gateway-operator
opened
Feature request: Take auth credentials from a Secret
priority: high type: enhancement
Currently the only way to set the authentication credentials is in plain text in the GatewayConfig. This is not optimal for security reasons and it makes automatic deployment difficult. Plan: let the GatewayConfig refer to a Secret for the auth credentials and generate the stunner config from there. Note that the stunnerd config will still include the credentials in plain text.
1.0
Feature request: Take auth credentials from a Secret - Currently the only way to set the authentication credentials is in plain text in the GatewayConfig. This is not optimal for security reasons and it makes automatic deployment difficult. Plan: let the GatewayConfig refer to a Secret for the auth credentials and generate the stunner config from there. Note that the stunnerd config will still include the credentials in plain text.
non_process
feature request take auth credentials from a secret currently the only way to set the authentication credentials is in plain text in the gatewayconfig this is not optimal for security reasons and it makes automatic deployment difficult plan let the gatewayconfig refer to a secret for the auth credentials and generate the stunner config from there note that the stunnerd config will still include the credentials in plain text
0
198,323
14,972,999,560
IssuesEvent
2021-01-28 00:03:12
microsoft/BotFramework-FunctionalTests
https://api.github.com/repos/microsoft/BotFramework-FunctionalTests
closed
Add "not supported" messages for expect replies cases in V3 bots
Area: Functional tests P2 Size: M feature-request
This issue involves updating the host bots to display a not supported message when the user selects expectReplies for V3 skills (JS or dotnet).
1.0
Add "not supported" messages for expect replies cases in V3 bots - This issue involves updating the host bots to display a not supported message when the user selects expectReplies for V3 skills (JS or dotnet).
non_process
add not supported messages for expect replies cases in bots this issue involves updating the host bots to display a not supported message when the user selects expectreplies for skills js or dotnet
0
535,141
15,682,990,444
IssuesEvent
2021-03-25 08:10:02
vortexntnu/Vortex-AUV
https://api.github.com/repos/vortexntnu/Vortex-AUV
closed
Thruster interface should adjust to batteries charge state
enhancement low priority
The battery charge state has a significant impact on the thrust delivered for a given PWM signal. Making the thruster interface choose the correct thrust to PWM mapping for the given charge state would ensure much more precise delivery of thrust.
1.0
Thruster interface should adjust to batteries charge state - The battery charge state has a significant impact on the thrust delivered for a given PWM signal. Making the thruster interface choose the correct thrust to PWM mapping for the given charge state would ensure much more precise delivery of thrust.
non_process
thruster interface should adjust to batteries charge state the battery charge state has a significant impact on the thrust delivered for a given pwm signal making the thruster interface choose the correct thrust to pwm mapping for the given charge state would ensure much more precise delivery of thrust
0
30,781
4,652,632,918
IssuesEvent
2016-10-03 14:34:53
e-government-ua/i
https://api.github.com/repos/e-government-ua/i
closed
Доработать механизм подписания заявки, чтоб так-же подписывались и каждый из прикрепленных к ней файл
active test _central-js
В продолжение реализации задачи: https://github.com/e-government-ua/i/issues/843 - [x] 1) Вместо одного мультипартфайла. слать их последовательность разделенных указанным в документации разделителем - [x] 2) При приеме подписанного контента сплитить так-же через указанный разделитель, при этом все кроме подписанной самой формы размещать в редисе вместо того что прикреплено на форме, и в соответственных полях с типом файл проставить новые ИД-шники (только после этого сабмитить) Документация: https://drive.google.com/file/d/0B25HjS4OnHuPbWRHV1N2RmJ5a1E/view?usp=sharing Мульти-подписание - стр. 21
1.0
Доработать механизм подписания заявки, чтоб так-же подписывались и каждый из прикрепленных к ней файл - В продолжение реализации задачи: https://github.com/e-government-ua/i/issues/843 - [x] 1) Вместо одного мультипартфайла. слать их последовательность разделенных указанным в документации разделителем - [x] 2) При приеме подписанного контента сплитить так-же через указанный разделитель, при этом все кроме подписанной самой формы размещать в редисе вместо того что прикреплено на форме, и в соответственных полях с типом файл проставить новые ИД-шники (только после этого сабмитить) Документация: https://drive.google.com/file/d/0B25HjS4OnHuPbWRHV1N2RmJ5a1E/view?usp=sharing Мульти-подписание - стр. 21
non_process
доработать механизм подписания заявки чтоб так же подписывались и каждый из прикрепленных к ней файл в продолжение реализации задачи вместо одного мультипартфайла слать их последовательность разделенных указанным в документации разделителем при приеме подписанного контента сплитить так же через указанный разделитель при этом все кроме подписанной самой формы размещать в редисе вместо того что прикреплено на форме и в соответственных полях с типом файл проставить новые ид шники только после этого сабмитить документация мульти подписание стр
0
21,506
29,670,896,282
IssuesEvent
2023-06-11 11:50:53
firebase/firebase-cpp-sdk
https://api.github.com/repos/firebase/firebase-cpp-sdk
closed
[C++] Nightly Integration Testing Report for Firestore
type: process nightly-testing
<hidden value="integration-test-status-comment"></hidden> ### ❌&nbsp; [build against repo] Integration test FAILED Requested by @sunmou99 on commit 04c7345024df2e930c9f081638cf5bd1bd1b1e55 Last updated: Sat Jun 10 04:45 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5229363364)** | Failures | Configs | |----------|---------| | firestore | [TEST] [FAILURE] [Linux] [1/2 ssl_lib: x64] [1/2 build_type: boringssl]<details><summary>(1 failed tests)</summary>&nbsp;&nbsp;QueryNetworkTest.TestWatchSurvivesNetworkDisconnect</details>[TEST] [FAILURE] [Windows] [1/2 ssl_lib: x64] [boringssl]<details><summary>(1 failed tests)</summary>&nbsp;&nbsp;ServerTimestampTest.TestServerTimestampsUsesPreviousValueFromLocalMutation</details> | Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against SDK] Integration test succeeded! Requested by @firebase-workflow-trigger[bot] on commit 04c7345024df2e930c9f081638cf5bd1bd1b1e55 Last updated: Sat Jun 10 07:45 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5230141059)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against tip] Integration test succeeded! Requested by @sunmou99 on commit c3afeae7800f06f786e8018add11be6fb3169715 Last updated: Sun Jun 11 04:49 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5235055227)**
1.0
[C++] Nightly Integration Testing Report for Firestore - <hidden value="integration-test-status-comment"></hidden> ### ❌&nbsp; [build against repo] Integration test FAILED Requested by @sunmou99 on commit 04c7345024df2e930c9f081638cf5bd1bd1b1e55 Last updated: Sat Jun 10 04:45 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5229363364)** | Failures | Configs | |----------|---------| | firestore | [TEST] [FAILURE] [Linux] [1/2 ssl_lib: x64] [1/2 build_type: boringssl]<details><summary>(1 failed tests)</summary>&nbsp;&nbsp;QueryNetworkTest.TestWatchSurvivesNetworkDisconnect</details>[TEST] [FAILURE] [Windows] [1/2 ssl_lib: x64] [boringssl]<details><summary>(1 failed tests)</summary>&nbsp;&nbsp;ServerTimestampTest.TestServerTimestampsUsesPreviousValueFromLocalMutation</details> | Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against SDK] Integration test succeeded! Requested by @firebase-workflow-trigger[bot] on commit 04c7345024df2e930c9f081638cf5bd1bd1b1e55 Last updated: Sat Jun 10 07:45 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5230141059)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against tip] Integration test succeeded! Requested by @sunmou99 on commit c3afeae7800f06f786e8018add11be6fb3169715 Last updated: Sun Jun 11 04:49 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/5235055227)**
process
nightly integration testing report for firestore ❌ nbsp integration test failed requested by on commit last updated sat jun pdt failures configs firestore failed tests nbsp nbsp querynetworktest testwatchsurvivesnetworkdisconnect failed tests nbsp nbsp servertimestamptest testservertimestampsusespreviousvaluefromlocalmutation add flaky tests to ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated sat jun pdt ✅ nbsp integration test succeeded requested by on commit last updated sun jun pdt
1
12,588
14,991,830,257
IssuesEvent
2021-01-29 08:59:12
panther-labs/panther
https://api.github.com/repos/panther-labs/panther
closed
Create framework for publishing system metrics
p1 story team:data processing
### Description Create a framework that our code can use to publish system metrics. ### Related Services All ### Designs Not needed. ### Acceptance Criteria * Implementation of a metrics framework that our system can use to publish metrics to CloudWatch * The framework should make use of CloudWatch Embedded metrics format * The framework should allow performance-sensitive parts of the system to record metrics.
1.0
Create framework for publishing system metrics - ### Description Create a framework that our code can use to publish system metrics. ### Related Services All ### Designs Not needed. ### Acceptance Criteria * Implementation of a metrics framework that our system can use to publish metrics to CloudWatch * The framework should make use of CloudWatch Embedded metrics format * The framework should allow performance-sensitive parts of the system to record metrics.
process
create framework for publishing system metrics description create a framework that our code can use to publish system metrics related services all designs not needed acceptance criteria implementation of a metrics framework that our system can use to publish metrics to cloudwatch the framework should make use of cloudwatch embedded metrics format the framework should allow performance sensitive parts of the system to record metrics
1
15,268
19,248,452,104
IssuesEvent
2021-12-09 00:53:57
shirou/gopsutil
https://api.github.com/repos/shirou/gopsutil
closed
Wrong CPUPercent reports
os:linux package:process
**Describe the bug** I found that monitoring of process CPU percent is wrong if I use Go server which runs several goroutines and monitor it use gopsutil `proc.CPUPercent` function. Here is a proof; ``` # on k8s server I have a web server with PID 28 # below I dump usage of CPU using top and proc_mon.go code (see below) # as can be seen from several attempts the %CPU report numbers are quite different # I executed side-by-side top and gopsutil code top -p 28 -n 1 && ./proc_mon -pid 28 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 28 root 20 0 2884316 57560 28444 S 0.0 0.2 4:11.71 dbs2go 2020/12/07 21:26:06 PID 28, cpu 44.307152107818155 top -p 28 -n 1 && ./proc_mon -pid 28 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 28 root 20 0 2884316 57792 28444 S 280.0 0.2 4:30.29 dbs2go 2020/12/07 21:26:54 PID 28, cpu 43.751437932120716 top -p 28 -n 1 && ./proc_mon -pid 28 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 28 root 20 0 2884316 57792 28444 S 293.3 0.2 7:07.27 dbs2go 2020/12/07 21:28:01 PID 28, cpu 62.544830080650456 top -p 28 -n 1 && ./proc_mon -pid 28 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 28 root 20 0 2958048 57816 28444 S 0.0 0.2 8:33.91 dbs2go 2020/12/07 21:28:35 PID 28, cpu 71.51142184852769 ``` **To Reproduce** ``` # proc_mon.go codebase package main import ( "flag" "log" "github.com/shirou/gopsutil/process" ) func main() { var pid int flag.IntVar(&pid, "pid", 0, "pid to watch") flag.Parse() run(pid) } func run(pid int) { if proc, err := process.NewProcess(int32(pid)); err == nil { if v, e := proc.CPUPercent(); e == nil { log.Printf("PID %d, cpu %v\n", pid, v) } } } ``` **Expected behavior** I'm not sure about logic of CPUPercent function but it should match values reported by top. If I run a single process then I see that top and gopsutil code is consistent. When I run Go-based server code which spawn multiple goroutines then I see very weird numbers reported by gopsutil codebase. All tests above were done on kubernetes cluster running Linux and web server code is provisioned to use 4 cores. **Environment (please complete the following information):** - Linux: [paste contents of `/etc/os-release` and the result of `uname -a`] CentOS Linux 7 (Core) Linux dbs2go-75cd7bd46-9mlx5 5.4.8-200.fc31.x86_64 #1 SMP Mon Jan 6 16:44:18 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux and I run pod under root account. **Additional context** Code is compiled with go version 1.15.2
1.0
Wrong CPUPercent reports - **Describe the bug** I found that monitoring of process CPU percent is wrong if I use Go server which runs several goroutines and monitor it use gopsutil `proc.CPUPercent` function. Here is a proof; ``` # on k8s server I have a web server with PID 28 # below I dump usage of CPU using top and proc_mon.go code (see below) # as can be seen from several attempts the %CPU report numbers are quite different # I executed side-by-side top and gopsutil code top -p 28 -n 1 && ./proc_mon -pid 28 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 28 root 20 0 2884316 57560 28444 S 0.0 0.2 4:11.71 dbs2go 2020/12/07 21:26:06 PID 28, cpu 44.307152107818155 top -p 28 -n 1 && ./proc_mon -pid 28 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 28 root 20 0 2884316 57792 28444 S 280.0 0.2 4:30.29 dbs2go 2020/12/07 21:26:54 PID 28, cpu 43.751437932120716 top -p 28 -n 1 && ./proc_mon -pid 28 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 28 root 20 0 2884316 57792 28444 S 293.3 0.2 7:07.27 dbs2go 2020/12/07 21:28:01 PID 28, cpu 62.544830080650456 top -p 28 -n 1 && ./proc_mon -pid 28 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 28 root 20 0 2958048 57816 28444 S 0.0 0.2 8:33.91 dbs2go 2020/12/07 21:28:35 PID 28, cpu 71.51142184852769 ``` **To Reproduce** ``` # proc_mon.go codebase package main import ( "flag" "log" "github.com/shirou/gopsutil/process" ) func main() { var pid int flag.IntVar(&pid, "pid", 0, "pid to watch") flag.Parse() run(pid) } func run(pid int) { if proc, err := process.NewProcess(int32(pid)); err == nil { if v, e := proc.CPUPercent(); e == nil { log.Printf("PID %d, cpu %v\n", pid, v) } } } ``` **Expected behavior** I'm not sure about logic of CPUPercent function but it should match values reported by top. If I run a single process then I see that top and gopsutil code is consistent. When I run Go-based server code which spawn multiple goroutines then I see very weird numbers reported by gopsutil codebase. All tests above were done on kubernetes cluster running Linux and web server code is provisioned to use 4 cores. **Environment (please complete the following information):** - Linux: [paste contents of `/etc/os-release` and the result of `uname -a`] CentOS Linux 7 (Core) Linux dbs2go-75cd7bd46-9mlx5 5.4.8-200.fc31.x86_64 #1 SMP Mon Jan 6 16:44:18 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux and I run pod under root account. **Additional context** Code is compiled with go version 1.15.2
process
wrong cpupercent reports describe the bug i found that monitoring of process cpu percent is wrong if i use go server which runs several goroutines and monitor it use gopsutil proc cpupercent function here is a proof on server i have a web server with pid below i dump usage of cpu using top and proc mon go code see below as can be seen from several attempts the cpu report numbers are quite different i executed side by side top and gopsutil code top p n proc mon pid pid user pr ni virt res shr s cpu mem time command root s pid cpu top p n proc mon pid pid user pr ni virt res shr s cpu mem time command root s pid cpu top p n proc mon pid pid user pr ni virt res shr s cpu mem time command root s pid cpu top p n proc mon pid pid user pr ni virt res shr s cpu mem time command root s pid cpu to reproduce proc mon go codebase package main import flag log github com shirou gopsutil process func main var pid int flag intvar pid pid pid to watch flag parse run pid func run pid int if proc err process newprocess pid err nil if v e proc cpupercent e nil log printf pid d cpu v n pid v expected behavior i m not sure about logic of cpupercent function but it should match values reported by top if i run a single process then i see that top and gopsutil code is consistent when i run go based server code which spawn multiple goroutines then i see very weird numbers reported by gopsutil codebase all tests above were done on kubernetes cluster running linux and web server code is provisioned to use cores environment please complete the following information linux centos linux core linux smp mon jan utc gnu linux and i run pod under root account additional context code is compiled with go version
1
78,265
14,973,486,989
IssuesEvent
2021-01-28 01:11:55
aws-samples/aws-secure-environment-accelerator
https://api.github.com/repos/aws-samples/aws-secure-environment-accelerator
closed
[BUG][Functional] PBMMAccel-CreateAccountEventTrigger_sm fails when IAM user creates an AWS Account
1-Codebase 2-Bug/Issue 3-Work In Progress
Bug reports which fail to provide the required information will be closed without action. **Required Basic Info** - Accelerator Version: v1.2.2 - Install Type: Clean - Install Branch: Standalone - Upgrade from version: N/A **Describe the bug** When creating a new AWS Account using the AWS Console, the PBMMAccel-CreateAccountEventTrigger_sm fails and does not apply the Quarantine SCP. **Failure Info** - What error messages have you identified, if any: ![image](https://user-images.githubusercontent.com/41204211/105276989-646bc980-5b57-11eb-883e-63b7f713bbe1.png) - What symptoms have you identified, if any: **Required files** - Please provide a copy of your config.json file (sanitize if required) **Steps To Reproduce** 0. Login with an IAM user 1. Go to AWS Organizations 2. Click on New Account 3. Fill in all details 4. Navigate to us-east-1 and view the state machine **Expected behavior** The state machine runs without failure **Additional context** Here is the scrubbed input JSON. ``` { "version": "0", "id": "ddc43eeb-c8ea-771c-cbdf-ff229fbdeddc", "detail-type": "AWS API Call via CloudTrail", "source": "aws.organizations", "account": "2xxxxxxxx173", "time": "2021-01-20T01:23:42Z", "region": "us-east-1", "resources": [], "detail": { "eventVersion": "1.08", "userIdentity": { "type": "IAMUser", "principalId": "AIDAxxxxxxxxxxO5WX", "arn": "arn:aws:iam::2xxxxxxxx173:user/AWS-ADMIN-xxxxxxx", "accountId": "2xxxxxxxx173", "accessKeyId": "ASIAxxxxxxxE3J", "userName": "AWS-ADMIN-xxxxxx", "sessionContext": { "sessionIssuer": {}, "webIdFederationData": {}, "attributes": { "mfaAuthenticated": "true", "creationDate": "2021-01-20T01:11:06Z" } } }, "eventTime": "2021-01-20T01:23:42Z", "eventSource": "organizations.amazonaws.com", "eventName": "CreateAccount", "awsRegion": "us-east-1", "sourceIPAddress": "70.xx.xx199", "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36, aws-internal/3 aws-sdk-java/1.11.927 Linux/4.9.217-0.3.ac.206.84.332.metal1.x86_64 OpenJDK_64-Bit_Server_VM/25.275-b01 java/1.8.0_275 vendor/Oracle_Corporation", "requestParameters": { "tags": [], "accountName": "****", "email": "****", "roleName": "AWSCloudFormationStackSetExecutionRole" }, "responseElements": { "createAccountStatus": { "id": "car-218bd3xxxxxxxxxb32ddd3edb", "state": "IN_PROGRESS", "accountName": "****", "requestedTimestamp": "Jan 20, 2021 1:23:42 AM" } }, "requestID": "08ff8318-a266-444b-ac8e-4102eadab3fe", "eventID": "8d7cb2fe-41af-4673-9252-9820487c1455", "readOnly": false, "eventType": "AwsApiCall", "managementEvent": true, "eventCategory": "Management" } } ```
1.0
[BUG][Functional] PBMMAccel-CreateAccountEventTrigger_sm fails when IAM user creates an AWS Account - Bug reports which fail to provide the required information will be closed without action. **Required Basic Info** - Accelerator Version: v1.2.2 - Install Type: Clean - Install Branch: Standalone - Upgrade from version: N/A **Describe the bug** When creating a new AWS Account using the AWS Console, the PBMMAccel-CreateAccountEventTrigger_sm fails and does not apply the Quarantine SCP. **Failure Info** - What error messages have you identified, if any: ![image](https://user-images.githubusercontent.com/41204211/105276989-646bc980-5b57-11eb-883e-63b7f713bbe1.png) - What symptoms have you identified, if any: **Required files** - Please provide a copy of your config.json file (sanitize if required) **Steps To Reproduce** 0. Login with an IAM user 1. Go to AWS Organizations 2. Click on New Account 3. Fill in all details 4. Navigate to us-east-1 and view the state machine **Expected behavior** The state machine runs without failure **Additional context** Here is the scrubbed input JSON. ``` { "version": "0", "id": "ddc43eeb-c8ea-771c-cbdf-ff229fbdeddc", "detail-type": "AWS API Call via CloudTrail", "source": "aws.organizations", "account": "2xxxxxxxx173", "time": "2021-01-20T01:23:42Z", "region": "us-east-1", "resources": [], "detail": { "eventVersion": "1.08", "userIdentity": { "type": "IAMUser", "principalId": "AIDAxxxxxxxxxxO5WX", "arn": "arn:aws:iam::2xxxxxxxx173:user/AWS-ADMIN-xxxxxxx", "accountId": "2xxxxxxxx173", "accessKeyId": "ASIAxxxxxxxE3J", "userName": "AWS-ADMIN-xxxxxx", "sessionContext": { "sessionIssuer": {}, "webIdFederationData": {}, "attributes": { "mfaAuthenticated": "true", "creationDate": "2021-01-20T01:11:06Z" } } }, "eventTime": "2021-01-20T01:23:42Z", "eventSource": "organizations.amazonaws.com", "eventName": "CreateAccount", "awsRegion": "us-east-1", "sourceIPAddress": "70.xx.xx199", "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36, aws-internal/3 aws-sdk-java/1.11.927 Linux/4.9.217-0.3.ac.206.84.332.metal1.x86_64 OpenJDK_64-Bit_Server_VM/25.275-b01 java/1.8.0_275 vendor/Oracle_Corporation", "requestParameters": { "tags": [], "accountName": "****", "email": "****", "roleName": "AWSCloudFormationStackSetExecutionRole" }, "responseElements": { "createAccountStatus": { "id": "car-218bd3xxxxxxxxxb32ddd3edb", "state": "IN_PROGRESS", "accountName": "****", "requestedTimestamp": "Jan 20, 2021 1:23:42 AM" } }, "requestID": "08ff8318-a266-444b-ac8e-4102eadab3fe", "eventID": "8d7cb2fe-41af-4673-9252-9820487c1455", "readOnly": false, "eventType": "AwsApiCall", "managementEvent": true, "eventCategory": "Management" } } ```
non_process
pbmmaccel createaccounteventtrigger sm fails when iam user creates an aws account bug reports which fail to provide the required information will be closed without action required basic info accelerator version install type clean install branch standalone upgrade from version n a describe the bug when creating a new aws account using the aws console the pbmmaccel createaccounteventtrigger sm fails and does not apply the quarantine scp failure info what error messages have you identified if any what symptoms have you identified if any required files please provide a copy of your config json file sanitize if required steps to reproduce login with an iam user go to aws organizations click on new account fill in all details navigate to us east and view the state machine expected behavior the state machine runs without failure additional context here is the scrubbed input json version id cbdf detail type aws api call via cloudtrail source aws organizations account time region us east resources detail eventversion useridentity type iamuser principalid arn arn aws iam user aws admin xxxxxxx accountid accesskeyid username aws admin xxxxxx sessioncontext sessionissuer webidfederationdata attributes mfaauthenticated true creationdate eventtime eventsource organizations amazonaws com eventname createaccount awsregion us east sourceipaddress xx useragent mozilla windows nt applewebkit khtml like gecko chrome safari aws internal aws sdk java linux ac openjdk bit server vm java vendor oracle corporation requestparameters tags accountname email rolename awscloudformationstacksetexecutionrole responseelements createaccountstatus id car state in progress accountname requestedtimestamp jan am requestid eventid readonly false eventtype awsapicall managementevent true eventcategory management
0
355,253
10,578,043,156
IssuesEvent
2019-10-07 21:31:51
bounswe/bounswe2019group8
https://api.github.com/repos/bounswe/bounswe2019group8
opened
Deciding visual standards
Effort: Medium Frontend Group work Help wanted Mobile Planning Platform: Mobile Platform: Web Priority: High Type: Discussion
**Actions:** 1. DIscuss UI spesifications such that web and mobile pages looks correlated. **Deadline:** 20.10.2019 - 00.00
1.0
Deciding visual standards - **Actions:** 1. DIscuss UI spesifications such that web and mobile pages looks correlated. **Deadline:** 20.10.2019 - 00.00
non_process
deciding visual standards actions discuss ui spesifications such that web and mobile pages looks correlated deadline
0
9,865
12,879,061,049
IssuesEvent
2020-07-11 19:56:34
natario1/CameraView
https://api.github.com/repos/natario1/CameraView
closed
After taking a picture, frame process will not receive any frame in Camera1.
about:camera1 about:frame processing is:bug status:has pr
Hi, In Full1PictureRecorder.java, it stops the preview callback by set to null before takePicture. The preview callback will be set back on onPictureTaken function but forget to re-add callback buffer. Reference: https://developer.android.com/reference/android/hardware/Camera Camera.PreviewCallback: a callback object that receives a copy of the preview frame, or null to stop receiving callbacks and "clear the buffer queue".
1.0
After taking a picture, frame process will not receive any frame in Camera1. - Hi, In Full1PictureRecorder.java, it stops the preview callback by set to null before takePicture. The preview callback will be set back on onPictureTaken function but forget to re-add callback buffer. Reference: https://developer.android.com/reference/android/hardware/Camera Camera.PreviewCallback: a callback object that receives a copy of the preview frame, or null to stop receiving callbacks and "clear the buffer queue".
process
after taking a picture frame process will not receive any frame in hi in java it stops the preview callback by set to null before takepicture the preview callback will be set back on onpicturetaken function but forget to re add callback buffer reference camera previewcallback a callback object that receives a copy of the preview frame or null to stop receiving callbacks and clear the buffer queue
1
42,451
2,870,331,891
IssuesEvent
2015-06-07 02:02:21
DavidWigley/RPGGame
https://api.github.com/repos/DavidWigley/RPGGame
closed
Restructure of Code
enhancement NormalPriority To Do
Although this code works fine, it is fairly messy. The main class is now pusing 2000 lines. We really need to break this all up into its own segment similar to how the player and AI portions are done. It will make it much easier to maintain and implement new features. Instead of doing a chain of objects we should do a web. IE. One class handles them all and try to limit the amount of cross referencing. Everything should be routed through our main class.
1.0
Restructure of Code - Although this code works fine, it is fairly messy. The main class is now pusing 2000 lines. We really need to break this all up into its own segment similar to how the player and AI portions are done. It will make it much easier to maintain and implement new features. Instead of doing a chain of objects we should do a web. IE. One class handles them all and try to limit the amount of cross referencing. Everything should be routed through our main class.
non_process
restructure of code although this code works fine it is fairly messy the main class is now pusing lines we really need to break this all up into its own segment similar to how the player and ai portions are done it will make it much easier to maintain and implement new features instead of doing a chain of objects we should do a web ie one class handles them all and try to limit the amount of cross referencing everything should be routed through our main class
0
10,882
13,652,166,975
IssuesEvent
2020-09-27 05:56:45
metakgp/chillzone
https://api.github.com/repos/metakgp/chillzone
closed
Add Method to parse first year time table from PDF
Hacktoberfest Image Processing good first issue
Currently, we have to manually enter the first year time table. The first year timetable is avaiable in the central timetable PDF; a PDF parsing method can be used to automate the process.
1.0
Add Method to parse first year time table from PDF - Currently, we have to manually enter the first year time table. The first year timetable is avaiable in the central timetable PDF; a PDF parsing method can be used to automate the process.
process
add method to parse first year time table from pdf currently we have to manually enter the first year time table the first year timetable is avaiable in the central timetable pdf a pdf parsing method can be used to automate the process
1
2,066
4,876,159,934
IssuesEvent
2016-11-16 11:54:23
nodejs/node
https://api.github.com/repos/nodejs/node
closed
child_process.spawn does not handle arguments properly
child_process
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v6.6.0 * **Platform**: Darwin elk.local 16.1.0 Darwin Kernel Version 16.1.0: Thu Oct 13 21:26:57 PDT 2016; root:xnu-3789.21.3~60/RELEASE_X86_64 x86_64 * **Subsystem**: macOS <!-- Enter your issue details below this comment. --> Hi, I am running into problem when spawn does not properly pass arguments to process. A simple command that I am trying to run: `git add '/Users/pronebird/Library/Application Support/HelloWorld/File.js'` ```js const file = '/Users/pronebird/Library/Application Support/HelloWorld/File.js'; const child = spawn('git', ['add', file ]); child.stdout.on('data', (data) => { console.log(`stdout: ${data}`); }); child.stderr.on('data', (data) => { console.log(`stderr: ${data}`); }); child.on('error', (err) => { console.log(`Failed to start ${executable}: ${err}`); }); child.on('close', (code) => { console.log(`Child process ${executable} exited with ${code}`); }); ``` Results in error from git. I can verify that command works from bash. I thought that the problem was in escaping path manually, so I tried to add \ before spaces in path, and wrap the entire path in single and double quotes. But none of this worked.
1.0
child_process.spawn does not handle arguments properly - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v6.6.0 * **Platform**: Darwin elk.local 16.1.0 Darwin Kernel Version 16.1.0: Thu Oct 13 21:26:57 PDT 2016; root:xnu-3789.21.3~60/RELEASE_X86_64 x86_64 * **Subsystem**: macOS <!-- Enter your issue details below this comment. --> Hi, I am running into problem when spawn does not properly pass arguments to process. A simple command that I am trying to run: `git add '/Users/pronebird/Library/Application Support/HelloWorld/File.js'` ```js const file = '/Users/pronebird/Library/Application Support/HelloWorld/File.js'; const child = spawn('git', ['add', file ]); child.stdout.on('data', (data) => { console.log(`stdout: ${data}`); }); child.stderr.on('data', (data) => { console.log(`stderr: ${data}`); }); child.on('error', (err) => { console.log(`Failed to start ${executable}: ${err}`); }); child.on('close', (code) => { console.log(`Child process ${executable} exited with ${code}`); }); ``` Results in error from git. I can verify that command works from bash. I thought that the problem was in escaping path manually, so I tried to add \ before spaces in path, and wrap the entire path in single and double quotes. But none of this worked.
process
child process spawn does not handle arguments properly thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform darwin elk local darwin kernel version thu oct pdt root xnu release subsystem macos hi i am running into problem when spawn does not properly pass arguments to process a simple command that i am trying to run git add users pronebird library application support helloworld file js js const file users pronebird library application support helloworld file js const child spawn git child stdout on data data console log stdout data child stderr on data data console log stderr data child on error err console log failed to start executable err child on close code console log child process executable exited with code results in error from git i can verify that command works from bash i thought that the problem was in escaping path manually so i tried to add before spaces in path and wrap the entire path in single and double quotes but none of this worked
1
14,635
17,769,068,683
IssuesEvent
2021-08-30 11:26:12
scikit-learn/scikit-learn
https://api.github.com/repos/scikit-learn/scikit-learn
closed
Label Encoder does not work in the new sklearn version
module:preprocessing
https://github.com/scikit-learn/scikit-learn/blob/2beed55847ee70d363bdbfe14ee4401438fba057/sklearn/preprocessing/_label.py#L38 Tested the following algorithim: ``` from sklearn import preprocessing le = preprocessing.LabelEncoder() le.fit([1, 2, 2, 6]) ``` The above code produces the following error: 237 # because the list is converted to Unicode numpy array 238 if isinstance(X, list) and \ --> 239 any(isinstance(elem, str) for row in X for elem in row): 240 dtype = object 241 else: TypeError: 'int' object is not iterable However, this is the example provided in the sklearn example. Edit: Here is the sklearn version ``` System: python: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] executable: /home/xxxxxxxxxxxxxxxxx/env/bin/python machine: Linux-5.4.0-77-generic-x86_64-with-glibc2.10 Python dependencies: pip: 20.1.1 setuptools: 47.1.0 sklearn: 0.24.2 numpy: 1.19.5 scipy: 1.6.3 Cython: None pandas: 1.2.4 matplotlib: 3.4.2 joblib: 1.0.1 threadpoolctl: 2.1.0 ```
1.0
Label Encoder does not work in the new sklearn version - https://github.com/scikit-learn/scikit-learn/blob/2beed55847ee70d363bdbfe14ee4401438fba057/sklearn/preprocessing/_label.py#L38 Tested the following algorithim: ``` from sklearn import preprocessing le = preprocessing.LabelEncoder() le.fit([1, 2, 2, 6]) ``` The above code produces the following error: 237 # because the list is converted to Unicode numpy array 238 if isinstance(X, list) and \ --> 239 any(isinstance(elem, str) for row in X for elem in row): 240 dtype = object 241 else: TypeError: 'int' object is not iterable However, this is the example provided in the sklearn example. Edit: Here is the sklearn version ``` System: python: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] executable: /home/xxxxxxxxxxxxxxxxx/env/bin/python machine: Linux-5.4.0-77-generic-x86_64-with-glibc2.10 Python dependencies: pip: 20.1.1 setuptools: 47.1.0 sklearn: 0.24.2 numpy: 1.19.5 scipy: 1.6.3 Cython: None pandas: 1.2.4 matplotlib: 3.4.2 joblib: 1.0.1 threadpoolctl: 2.1.0 ```
process
label encoder does not work in the new sklearn version tested the following algorithim from sklearn import preprocessing le preprocessing labelencoder le fit the above code produces the following error because the list is converted to unicode numpy array if isinstance x list and any isinstance elem str for row in x for elem in row dtype object else typeerror int object is not iterable however this is the example provided in the sklearn example edit here is the sklearn version system python default sep executable home xxxxxxxxxxxxxxxxx env bin python machine linux generic with python dependencies pip setuptools sklearn numpy scipy cython none pandas matplotlib joblib threadpoolctl
1
629,367
20,030,438,027
IssuesEvent
2022-02-02 04:44:52
minio/minio-js
https://api.github.com/repos/minio/minio-js
closed
Support IAM for Service Account
priority: medium community
aws add [IAM For Service Account](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) feature. It uses Web Identity Credentials and this is not support in minio-js yet. It would be better to bump SDK version or add this resolver. minio-go changes: https://github.com/minio/minio-go/pull/1183 https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/WebIdentityCredentials.html
1.0
Support IAM for Service Account - aws add [IAM For Service Account](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) feature. It uses Web Identity Credentials and this is not support in minio-js yet. It would be better to bump SDK version or add this resolver. minio-go changes: https://github.com/minio/minio-go/pull/1183 https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/WebIdentityCredentials.html
non_process
support iam for service account aws add feature it uses web identity credentials and this is not support in minio js yet it would be better to bump sdk version or add this resolver minio go changes
0
101,881
8,806,667,292
IssuesEvent
2018-12-27 05:47:13
humera987/FXLabs-Test-Automation
https://api.github.com/repos/humera987/FXLabs-Test-Automation
closed
testing 3 : ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound
testing 3
Project : testing 3 Job : Default Env : Default Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZDAyYmUwZjEtNDgzNy00MGNjLTliZGEtYWM2NTdkMzUwNDcz; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 27 Dec 2018 05:39:26 GMT]} Endpoint : http://13.56.210.25/api/v1/projects//autocodeconfig/add-abacpositive-rules Request : Response : { "timestamp" : "2018-12-27T05:39:26.573+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/projects/autocodeconfig/add-abacpositive-rules" } Logs : 2018-12-27 05:39:25 ERROR [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : No module found with the name [FXLabs/Common/MySQL-TimeBound-SQL_Injection_Strings] defined in Fxfile.yaml 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : URL [http://13.56.210.25/api/v1/projects//autocodeconfig/add-abacpositive-rules] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Method [GET] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Request [] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic SHVtZXJhLy9odW1lcmFAZnhsYWJzLmlvOmh1bWVyYTEyMyQ=]}] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Response [{ "timestamp" : "2018-12-27T05:39:26.573+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/projects/autocodeconfig/add-abacpositive-rules" }] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Response-Headers [{X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZDAyYmUwZjEtNDgzNy00MGNjLTliZGEtYWM2NTdkMzUwNDcz; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 27 Dec 2018 05:39:26 GMT]}] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : StatusCode [404] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Time [1143] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Size [174] 2018-12-27 05:39:26 INFO [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [1143 < 7000 OR 1143 > 10000] result [Passed] 2018-12-27 05:39:26 ERROR [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
1.0
testing 3 : ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound - Project : testing 3 Job : Default Env : Default Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZDAyYmUwZjEtNDgzNy00MGNjLTliZGEtYWM2NTdkMzUwNDcz; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 27 Dec 2018 05:39:26 GMT]} Endpoint : http://13.56.210.25/api/v1/projects//autocodeconfig/add-abacpositive-rules Request : Response : { "timestamp" : "2018-12-27T05:39:26.573+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/projects/autocodeconfig/add-abacpositive-rules" } Logs : 2018-12-27 05:39:25 ERROR [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : No module found with the name [FXLabs/Common/MySQL-TimeBound-SQL_Injection_Strings] defined in Fxfile.yaml 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : URL [http://13.56.210.25/api/v1/projects//autocodeconfig/add-abacpositive-rules] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Method [GET] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Request [] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Request-Headers [{Content-Type=[application/json], Accept=[application/json], Authorization=[Basic SHVtZXJhLy9odW1lcmFAZnhsYWJzLmlvOmh1bWVyYTEyMyQ=]}] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Response [{ "timestamp" : "2018-12-27T05:39:26.573+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/projects/autocodeconfig/add-abacpositive-rules" }] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Response-Headers [{X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZDAyYmUwZjEtNDgzNy00MGNjLTliZGEtYWM2NTdkMzUwNDcz; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Thu, 27 Dec 2018 05:39:26 GMT]}] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : StatusCode [404] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Time [1143] 2018-12-27 05:39:26 DEBUG [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Size [174] 2018-12-27 05:39:26 INFO [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [1143 < 7000 OR 1143 > 10000] result [Passed] 2018-12-27 05:39:26 ERROR [ApiV1ProjectsProjectidAutocodeconfigAddAbacpositiveRulesGetPathParamProjectidMysqlSqlInjectionTimebound] : Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
non_process
testing project testing job default env default region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api projects autocodeconfig add abacpositive rules logs error no module found with the name defined in fxfile yaml debug url debug method debug request debug request headers accept authorization debug response timestamp status error not found message no message available path api projects autocodeconfig add abacpositive rules debug response headers x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date debug statuscode debug time debug size info assertion resolved to result error assertion resolved to result fx bot
0
231,175
7,624,577,002
IssuesEvent
2018-05-03 18:28:13
PhonologicalCorpusTools/SLP-Annotator
https://api.github.com/repos/PhonologicalCorpusTools/SLP-Annotator
opened
adding options to dropdown menus
easytofix enhancement priority
Add the options 'e' and 'f' (in addition to 'E' and 'F') to slot numbers: 4, 5, 17. 18, 19, 22, 23, 24, 27, 28, 29, 32, 33, 34 Also, add 'shortcut' keyboard options for these -- hitting the 'w' key should auto-fill 'e' and hitting the 's' key should auto-fill 'f.'
1.0
adding options to dropdown menus - Add the options 'e' and 'f' (in addition to 'E' and 'F') to slot numbers: 4, 5, 17. 18, 19, 22, 23, 24, 27, 28, 29, 32, 33, 34 Also, add 'shortcut' keyboard options for these -- hitting the 'w' key should auto-fill 'e' and hitting the 's' key should auto-fill 'f.'
non_process
adding options to dropdown menus add the options e and f in addition to e and f to slot numbers also add shortcut keyboard options for these hitting the w key should auto fill e and hitting the s key should auto fill f
0
14,574
17,702,938,452
IssuesEvent
2021-08-25 01:55:48
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
Change term - associatedMedia
Term - change Class - Occurrence non-normative Process - complete
## Change term * Submitter: John Wieczorek @tucotuco * Justification (why is this change necessary?): Clarity * Proponents (who needs this change): Everyone Current Term definition: https://dwc.tdwg.org/list/#dwc_associatedMedia Proposed new attributes of the term: * Term name (in lowerCamelCase): associatedMedia * Organized in Class (e.g. Location, Taxon): Occurrence * Definition of the term: A list (concatenated and separated) of identifiers (publication, global unique identifier, URI) of media associated with the Occurrence. * Usage comments (recommendations regarding content, etc.): **This term can be used to provide a list of associations to other media resources. Note that the Darwin Core extension "Audubon Media Description", based on the Audubon Core standard, is a rich alternative means of capturing metadata about associated media. Recommended best practice is to separate the values in a list with space vertical bar space ( | ).** * Examples: `https://arctos.database.museum/media/10520962 | https://arctos.database.museum/media/10520964` * Refines (identifier of the broader term this term refines, if applicable): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/associatedMedia-2020-08-12 * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/MultimediaObjects Proposed changes to associatedOccurrences (Issue #324) suggest that a clarification should also be made in the associatedMedia usage notes. Specifically, the convention on list item separation and the reference to the [Audubon Media Description extension](https://tools.gbif.org/dwca-validator/extension.do?id=http://rs.tdwg.org/ac/terms/Multimedia) as an alternative means of capturing these data are recommended.
1.0
Change term - associatedMedia - ## Change term * Submitter: John Wieczorek @tucotuco * Justification (why is this change necessary?): Clarity * Proponents (who needs this change): Everyone Current Term definition: https://dwc.tdwg.org/list/#dwc_associatedMedia Proposed new attributes of the term: * Term name (in lowerCamelCase): associatedMedia * Organized in Class (e.g. Location, Taxon): Occurrence * Definition of the term: A list (concatenated and separated) of identifiers (publication, global unique identifier, URI) of media associated with the Occurrence. * Usage comments (recommendations regarding content, etc.): **This term can be used to provide a list of associations to other media resources. Note that the Darwin Core extension "Audubon Media Description", based on the Audubon Core standard, is a rich alternative means of capturing metadata about associated media. Recommended best practice is to separate the values in a list with space vertical bar space ( | ).** * Examples: `https://arctos.database.museum/media/10520962 | https://arctos.database.museum/media/10520964` * Refines (identifier of the broader term this term refines, if applicable): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/associatedMedia-2020-08-12 * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/MultimediaObjects Proposed changes to associatedOccurrences (Issue #324) suggest that a clarification should also be made in the associatedMedia usage notes. Specifically, the convention on list item separation and the reference to the [Audubon Media Description extension](https://tools.gbif.org/dwca-validator/extension.do?id=http://rs.tdwg.org/ac/terms/Multimedia) as an alternative means of capturing these data are recommended.
process
change term associatedmedia change term submitter john wieczorek tucotuco justification why is this change necessary clarity proponents who needs this change everyone current term definition proposed new attributes of the term term name in lowercamelcase associatedmedia organized in class e g location taxon occurrence definition of the term a list concatenated and separated of identifiers publication global unique identifier uri of media associated with the occurrence usage comments recommendations regarding content etc this term can be used to provide a list of associations to other media resources note that the darwin core extension audubon media description based on the audubon core standard is a rich alternative means of capturing metadata about associated media recommended best practice is to separate the values in a list with space vertical bar space examples refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable datasets dataset units unit multimediaobjects proposed changes to associatedoccurrences issue suggest that a clarification should also be made in the associatedmedia usage notes specifically the convention on list item separation and the reference to the as an alternative means of capturing these data are recommended
1
133,603
12,545,703,593
IssuesEvent
2020-06-05 19:24:37
MelissaAU/Equipo2_JLPPI2020
https://api.github.com/repos/MelissaAU/Equipo2_JLPPI2020
reopened
Título del proyecto
documentation
Definición del nombre del proyecto de acuerdo a la necesidad encontrada y la solución planteada
1.0
Título del proyecto - Definición del nombre del proyecto de acuerdo a la necesidad encontrada y la solución planteada
non_process
título del proyecto definición del nombre del proyecto de acuerdo a la necesidad encontrada y la solución planteada
0
195,908
15,560,239,201
IssuesEvent
2021-03-16 12:30:18
SamuelLozanoJuarez/go-bees
https://api.github.com/repos/SamuelLozanoJuarez/go-bees
closed
Rematar memoria
documentation
- [x] Enlace GooglePlay. - [x] Imagen CI - [x] Tabla de estadísticas - [x] Firmar electrónicamente - [x] Imagen página web o google play
1.0
Rematar memoria - - [x] Enlace GooglePlay. - [x] Imagen CI - [x] Tabla de estadísticas - [x] Firmar electrónicamente - [x] Imagen página web o google play
non_process
rematar memoria enlace googleplay imagen ci tabla de estadísticas firmar electrónicamente imagen página web o google play
0