**Dataset schema** (column dtypes and value statistics):

| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |
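The schema above describes rows of a GitHub IssuesEvent dataset with a two-class string `label` column and an integer `binary_label`. As a minimal sketch of how those two columns appear to relate, here is a toy pandas frame built from values in the sample rows below; this is illustrative only, since the actual dataset file and loading path are not given here:

```python
import pandas as pd

# Toy frame mirroring a few fields from the sample records; not the full dataset.
df = pd.DataFrame({
    "type": ["IssuesEvent", "IssuesEvent"],
    "repo": ["MicrosoftDocs/azure-docs", "tonsky/FiraCode"],
    "label": ["process", "non_process"],
    "binary_label": [1, 0],
})

# "binary_label" appears to be the integer encoding of the "label" column.
encoding = df.groupby("label")["binary_label"].first().to_dict()
print(encoding)  # {'non_process': 0, 'process': 1}
```

The same check against the real table would confirm whether the encoding is consistent across all rows.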
**Row 20,733**
- id: 27,431,529,675
- type: IssuesEvent
- created_at: 2023-03-02 02:00:09
- repo: lizhihao6/get-daily-arxiv-noti
- repo_url: https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
- action: opened
- title: New submissions for Thu, 2 Mar 23
- labels: event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
- body:
## Keyword: events
### Applying Plain Transformers to Real-World Point Clouds
- **Authors:** Lanxiao Li, Michael Heizmann
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.00086
- **Pdf link:** https://arxiv.org/pdf/2303.00086
- **Abstract**
Due to the lack of inductive bias, transformer-based models usually require a large amount of training data. The problem is especially concerning in 3D vision, as 3D data are harder to acquire and annotate. To overcome this problem, previous works modify the architecture of transformers to incorporate inductive biases by applying, e.g., local attention and down-sampling. Although they have achieved promising results, earlier works on transformers for point clouds have two issues. First, the power of plain transformers is still under-explored. Second, they focus on simple and small point clouds instead of complex real-world ones. This work revisits the plain transformers in real-world point cloud understanding. We first take a closer look at some fundamental components of plain transformers, e.g., patchifier and positional embedding, for both efficiency and performance. To close the performance gap due to the lack of inductive bias and annotated data, we investigate self-supervised pre-training with masked autoencoder (MAE). Specifically, we propose drop patch, which prevents information leakage and significantly improves the effectiveness of MAE. Our models achieve SOTA results in semantic segmentation on the S3DIS dataset and object detection on the ScanNet dataset with lower computational costs. Our work provides a new baseline for future research on transformers for point clouds.
## Keyword: event camera
### Event Fusion Photometric Stereo Network
- **Authors:** Wonjeong Ryoo, Giljoo Nam, Jae-Sang Hyun, Sangpil Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.00308
- **Pdf link:** https://arxiv.org/pdf/2303.00308
- **Abstract**
We introduce a novel method to estimate surface normal of an object in an ambient light environment using RGB and event cameras. Modern photometric stereo methods rely on RGB cameras in a darkroom to avoid ambient illumination. To alleviate the limitations of using an RGB camera in a darkroom setting, we utilize an event camera with high dynamic range and low latency by capturing essential light information. This is the first study to use event cameras for photometric stereo in continuous light sources and ambient light environments. Additionally, we curate a new photometric stereo dataset captured by RGB and event cameras under various ambient lights. Our proposed framework, Event Fusion Photometric Stereo Network (EFPS-Net), estimates surface normals using RGB frames and event signals. EFPS-Net outperforms state-of-the-art methods on a real-world dataset with ambient lights, demonstrating the effectiveness of incorporating additional modalities to alleviate limitations caused by ambient illumination.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
There is no result
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### TAU: A Framework for Video-Based Traffic Analytics Leveraging Artificial Intelligence and Unmanned Aerial Systems
- **Authors:** Bilel Benjdira, Anis Koubaa, Ahmad Taher Azar, Zahid Khan, Adel Ammar, Wadii Boulila
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.00337
- **Pdf link:** https://arxiv.org/pdf/2303.00337
- **Abstract**
Smart traffic engineering and intelligent transportation services are in increasing demand from governmental authorities to optimize traffic performance and thus reduce energy costs, increase the drivers' safety and comfort, ensure traffic laws enforcement, and detect traffic violations. In this paper, we address this challenge, and we leverage the use of Artificial Intelligence (AI) and Unmanned Aerial Vehicles (UAVs) to develop an AI-integrated video analytics framework, called TAU (Traffic Analysis from UAVs), for automated traffic analytics and understanding. Unlike previous works on traffic video analytics, we propose an automated object detection and tracking pipeline from video processing to advanced traffic understanding using high-resolution UAV images. TAU combines six main contributions. First, it proposes a pre-processing algorithm to adapt the high-resolution UAV image as input to the object detector without lowering the resolution. This ensures an excellent detection accuracy from high-quality features, particularly the small size of detected objects from UAV images. Second, it introduces an algorithm for recalibrating the vehicle coordinates to ensure that vehicles are uniquely identified and tracked across the multiple crops of the same frame. Third, it presents a speed calculation algorithm based on accumulating information from successive frames. Fourth, TAU counts the number of vehicles per traffic zone based on the Ray Tracing algorithm. Fifth, TAU has a fully independent algorithm for crossroad arbitration based on the data gathered from the different zones surrounding it. Sixth, TAU introduces a set of algorithms for extracting twenty-four types of insights from the raw data collected. The code is shared here: https://github.com/bilel-bj/TAU. Video demonstrations are provided here: https://youtu.be/wXJV0H7LviU and here: https://youtu.be/kGv0gmtVEbI.
### The style transformer with common knowledge optimization for image-text retrieval
- **Authors:** Wenrui Li, Zhengyu Ma, Xiaopeng Fan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2303.00448
- **Pdf link:** https://arxiv.org/pdf/2303.00448
- **Abstract**
Image-text retrieval which associates different modalities has drawn broad attention due to its excellent research value and broad real-world application. While the algorithms keep updated, most of them haven't taken the high-level semantic relationships ("style embedding") and common knowledge from multi-modalities into full consideration. To this end, we propose a novel style transformer network with common knowledge optimization (CKSTN) for image-text retrieval. The main module is the common knowledge adaptor (CKA) with both the style embedding extractor (SEE) and the common knowledge optimization (CKO) modules. Specifically, the SEE is designed to effectively extract high-level features. The CKO module is introduced to dynamically capture the latent concepts of common knowledge from different modalities. Together, they could assist in the formation of item representations in lightweight transformers. Besides, to get generalized temporal common knowledge, we propose a sequential update strategy to effectively integrate the features of different layers in SEE with previous common feature units. CKSTN outperforms the results of state-of-the-art methods in image-text retrieval on MSCOCO and Flickr30K datasets. Moreover, CKSTN is more convenient and practical for the application of real scenes, due to the better performance and lower parameters.
### Valid Information Guidance Network for Compressed Video Quality Enhancement
- **Authors:** Xuan Sun, Ziyue Zhang, Guannan Chen, Dan Zhu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2303.00520
- **Pdf link:** https://arxiv.org/pdf/2303.00520
- **Abstract**
In recent years deep learning methods have shown great superiority in compressed video quality enhancement tasks. Existing methods generally take the raw video as the ground truth and extract practical information from consecutive frames containing various artifacts. However, they do not fully exploit the valid information of compressed and raw videos to guide the quality enhancement for compressed videos. In this paper, we propose a unique Valid Information Guidance scheme (VIG) to enhance the quality of compressed videos by mining valid information from both compressed videos and raw videos. Specifically, we propose an efficient framework, Compressed Redundancy Filtering (CRF) network, to balance speed and enhancement. After removing the redundancy by filtering the information, CRF can use the valid information of the compressed video to reconstruct the texture. Furthermore, we propose a progressive Truth Guidance Distillation (TGD) strategy, which does not need to design additional teacher models and distillation loss functions. By only using the ground truth as input to guide the model to aggregate the correct spatio-temporal correspondence across the raw frames, TGD can significantly improve the enhancement effect without increasing the extra training cost. Extensive experiments show that our method achieves the state-of-the-art performance of compressed video quality enhancement in terms of accuracy and efficiency.
## Keyword: raw image
There is no result
- index: 2.0
- text_combine: *(verbatim copy of the title and body above)*
- label: process
- text: *(lowercased, punctuation-stripped copy of the title and body above)*
- binary_label: 1
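In each record the `text_combine` field appears to be built by joining `title` and `body` with a hyphen separator (e.g. "New submissions for Thu, 2 Mar 23 - ## Keyword: events"). A one-line sketch of that presumed construction; the function name is hypothetical:

```python
def combine(title: str, body: str) -> str:
    # Presumed construction of the text_combine column: "<title> - <body>".
    return f"{title} - {body}"

print(combine("New submissions for Thu, 2 Mar 23", "## Keyword: events"))
# New submissions for Thu, 2 Mar 23 - ## Keyword: events
```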
**Row 8,022**
- id: 11,207,511,168
- type: IssuesEvent
- created_at: 2020-01-06 03:59:00
- repo: MicrosoftDocs/azure-docs
- repo_url: https://api.github.com/repos/MicrosoftDocs/azure-docs
- action: closed
- title: Changed filepath
- labels: Pri2 automation/svc cxp process-automation/subsvc product-question triaged
- body:
after following these instructions, I cannot get past step 3 of installation because onboarding.py does not exist after installing the Log Analytics agent
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e38be5b8-d76d-a4f1-c014-7bf9248be2de
* Version Independent ID: 976e5e90-b28c-d7ba-0495-69d92e62ea46
* Content: [Azure Automation Linux Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-linux-hrw-install#feedback)
* Content Source: [articles/automation/automation-linux-hrw-install.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-linux-hrw-install.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
- index: 1.0
- text_combine: *(verbatim copy of the title and body above)*
- label: process
- text: *(lowercased, punctuation-stripped copy of the title and body above)*
- binary_label: 1
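The `text` field looks like a normalized copy of `text_combine`: lowercased, with punctuation replaced by spaces and digit-bearing tokens dropped (compare "step 3 of installation because onboarding.py" with "step of installation because onboarding py"). A rough re-implementation of that presumed normalization; the exact pipeline used to build the dataset is not documented here, so this is an approximation:

```python
import re

def normalize(text: str) -> str:
    # Lowercase, turn punctuation into spaces, then drop any token that
    # contains a digit ("step 3" -> "step", "onboarding.py" -> "onboarding py").
    lowered = text.lower()
    spaced = re.sub(r"[^\w\s]|_", " ", lowered)
    tokens = [t for t in spaced.split() if not any(c.isdigit() for c in t)]
    return " ".join(tokens)

print(normalize("Changed filepath - step 3: onboarding.py does not exist"))
# changed filepath step onboarding py does not exist
```

Note this sketch also strips non-ASCII symbols such as ⚠, which the real `text` column retains, so it is only an approximation of the original preprocessing.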
**Row 59,456**
- id: 11,965,700,760
- type: IssuesEvent
- created_at: 2020-04-06 00:29:48
- repo: tonsky/FiraCode
- repo_url: https://api.github.com/repos/tonsky/FiraCode
- action: closed
- title: Unicode math symbols
- labels: Unicode
- body:
I use FiraCode with Coq and to me the ligatures are superior to using just unicode (for example a unicode implication arrow is tiny, whereas `->` with ligatures takes up exactly the amount of space I like). However some unicode math symbols (particularly those reachable as LaTeX commands) are quite poor. For example calligraphic letters (`\mathcal{N}`) or symbols (like `\in`). Also white spaces are broken (no longer the same size) after those symbols. Perhaps this is a fallback font? If I can configure a different fallback font perhaps I can find a good compromise.

It would be awesome if a bunch of mathematical symbols could be added to FiraCode! (I can imagine symbols like `\in` being very useful in other programming languages as infix operator).
- index: 1.0
- text_combine: *(verbatim copy of the title and body above)*
- label: non_process
- text: *(lowercased, punctuation-stripped copy of the title and body above)*
- binary_label: 0
|
420,264
| 28,242,933,689
|
IssuesEvent
|
2023-04-06 08:36:57
|
camunda/camunda-bpm-platform
|
https://api.github.com/repos/camunda/camunda-bpm-platform
|
reopened
|
Fix privacy link in website footer
|
type:task scope:documentation version:7.20.0 version:7.18.7 version:7.17.12 version:7.19.1
|
### Acceptance Criteria (Required on creation)
* The link to our privacy statement is changed to https://camunda.com/legal/privacy/
### Hints
* Needs to be adjusted in the theme and the docs-manual accordingly
### Links
<!--
- https://jira.camunda.com/browse/CAM-12398
-->
### Breakdown
- [x] https://github.com/camunda/camunda-docs-theme/pull/40
- [ ] https://github.com/camunda/camunda-docs-manual/pull/
### Dev2QA handover
- [ ] Does this ticket need a QA test and the testing goals are not clear from the description? Add a [Dev2QA handover comment](https://confluence.camunda.com/display/AP/Handover+Dev+-%3E+Testing)
|
1.0
|
Fix privacy link in website footer - ### Acceptance Criteria (Required on creation)
* The link to our privacy statement is changed to https://camunda.com/legal/privacy/
### Hints
* Needs to be adjusted in the theme and the docs-manual accordingly
### Links
<!--
- https://jira.camunda.com/browse/CAM-12398
-->
### Breakdown
- [x] https://github.com/camunda/camunda-docs-theme/pull/40
- [ ] https://github.com/camunda/camunda-docs-manual/pull/
### Dev2QA handover
- [ ] Does this ticket need a QA test and the testing goals are not clear from the description? Add a [Dev2QA handover comment](https://confluence.camunda.com/display/AP/Handover+Dev+-%3E+Testing)
|
non_process
|
fix privacy link in website footer acceptance criteria required on creation the link to our privacy statement is changed to hints needs to be adjusted in the theme and the docs manual accordingly links breakdown handover does this ticket need a qa test and the testing goals are not clear from the description add a
| 0
|
119,077
| 25,464,553,049
|
IssuesEvent
|
2022-11-25 01:47:47
|
objectos/objectos
|
https://api.github.com/repos/objectos/objectos
|
closed
|
Field declarations TC06: array initializer
|
a:objectos-code
|
Objectos Code:
```java
field(t(_int(), dim()), id("a"), a());
field(t(_int(), dim()), id("b"), a(i(0)));
field(t(_int(), dim()), id("c"), a(i(0), i(1)));
```
Should generate:
```java
int[] a = {};
int[] b = {0};
int[] c = {0, 1};
```
|
1.0
|
Field declarations TC06: array initializer - Objectos Code:
```java
field(t(_int(), dim()), id("a"), a());
field(t(_int(), dim()), id("b"), a(i(0)));
field(t(_int(), dim()), id("c"), a(i(0), i(1)));
```
Should generate:
```java
int[] a = {};
int[] b = {0};
int[] c = {0, 1};
```
|
non_process
|
field declarations array initializer objectos code java field t int dim id a a field t int dim id b a i field t int dim id c a i i should generate java int a int b int c
| 0
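The Objectos record above pairs fluent builder calls with the Java declarations they are expected to emit. As a rough illustration of that mapping only (a hypothetical helper, not the actual Objectos API), the generation step can be sketched as a small string builder:

```python
def int_array_field(name, *values):
    """Render a Java int[] field declaration with an array initializer.

    Illustrative sketch: mimics the output the Objectos issue expects,
    not the Objectos builder API itself.
    """
    initializer = "{" + ", ".join(str(v) for v in values) + "}"
    return f"int[] {name} = {initializer};"

# Matches the three expected declarations from the record:
print(int_array_field("a"))        # int[] a = {};
print(int_array_field("b", 0))     # int[] b = {0};
print(int_array_field("c", 0, 1))  # int[] c = {0, 1};
```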
|
15,991
| 10,470,458,897
|
IssuesEvent
|
2019-09-23 03:47:55
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
closed
|
Converting a just created Reusable Block to Regular Blocks inserts a 'Reusable Template' block into a post
|
[Component] Reusable Blocks [Type] Bug
|
**Describe the bug**
When converting a just created Reusable Block (that has multiple blocks) into Regular Blocks, a wrapping 'Reusable Template' block is inserted into the post. This wrapping block seems to have a template lock and is hard to delete (no block settings menu and is not properly selectable).
**To reproduce**
Steps to reproduce the behavior:
1. Create two paragraph blocks with any text content (paragraphs are easiest, but any block type should have this issue)
2. Select the two paragraphs and create a reusable block from them
3. Convert this reusable block to regular blocks
4. Observe that the blocks are wrapped in a 'Reusable Template' block.
**Expected behavior**
The blocks are returned back to their original state.
**Screenshots**
<img width="277" alt="Screen Shot 2019-08-24 at 4 30 02 pm" src="https://user-images.githubusercontent.com/677833/63634694-9cce5600-c68c-11e9-824b-f985905e2e15.png">
**Additional context**
- Inserting other instances of the reusable block and converting to regular blocks works fine.
- Tested in Chrome
- Discovered in `master` at `e8bc32112d1f4606009da1d7e46deeb7ab873378`
|
True
|
Converting a just created Reusable Block to Regular Blocks inserts a 'Reusable Template' block into a post - **Describe the bug**
When converting a just created Reusable Block (that has multiple blocks) into Regular Blocks, a wrapping 'Reusable Template' block is inserted into the post. This wrapping block seems to have a template lock and is hard to delete (no block settings menu and is not properly selectable).
**To reproduce**
Steps to reproduce the behavior:
1. Create two paragraph blocks with any text content (paragraphs are easiest, but any block type should have this issue)
2. Select the two paragraphs and create a reusable block from them
3. Convert this reusable block to regular blocks
4. Observe that the blocks are wrapped in a 'Reusable Template' block.
**Expected behavior**
The blocks are returned back to their original state.
**Screenshots**
<img width="277" alt="Screen Shot 2019-08-24 at 4 30 02 pm" src="https://user-images.githubusercontent.com/677833/63634694-9cce5600-c68c-11e9-824b-f985905e2e15.png">
**Additional context**
- Inserting other instances of the reusable block and converting to regular blocks works fine.
- Tested in Chrome
- Discovered in `master` at `e8bc32112d1f4606009da1d7e46deeb7ab873378`
|
non_process
|
converting a just created reusable block to regular blocks inserts a reusable template block into a post describe the bug when converting a just created reusable block that has multiple blocks into regular blocks a wrapping reusable template block is inserted into the post this wrapping block seems to have a template lock and is hard to delete no block settings menu and is not properly selectable to reproduce steps to reproduce the behavior create two paragraph blocks with any text content paragraphs are easiest but any block type should have this issue select the two paragraphs and create a reusable block from them convert this reusable block to regular blocks observe that the blocks are wrapped in a reusable template block expected behavior the blocks are returned back to their original state screenshots img width alt screen shot at pm src additional context inserting other instances of the reusable block and converting to regular blocks works fine tested in chrome discovered in master at
| 0
|
707,562
| 24,309,759,884
|
IssuesEvent
|
2022-09-29 20:57:16
|
HackerN64/HackerSM64
|
https://api.github.com/repos/HackerN64/HackerSM64
|
closed
|
Memory leak when switching between levels using debug level select
|
bug high priority
|
While I was testing levels, the game crashed when I entered a level for about the tenth time.
This only happens when using the debug level select, as warping between areas is fine.
My guess is that, because of how the level select loads levels, it doesn't clear the collision pools properly? See the `MEM` value in the video below.
https://user-images.githubusercontent.com/38191089/192422074-cfa00412-f11b-40fe-9a8d-cedfb3fda2d2.mp4
|
1.0
|
Memory leak when switching between levels using debug level select - While I was testing levels, the game crashed when I entered a level for about the tenth time.
This only happens when using the debug level select, as warping between areas is fine.
My guess is that, because of how the level select loads levels, it doesn't clear the collision pools properly? See the `MEM` value in the video below.
https://user-images.githubusercontent.com/38191089/192422074-cfa00412-f11b-40fe-9a8d-cedfb3fda2d2.mp4
|
non_process
|
memory leak when switching between levels using debug level select while i was testing levels the game crashed when i entered a level like a time this only happens when using debug level select as warping between areas is fine my guess is that because how level selects loads levels it doesn t clear collision pools properly see mem value below
| 0
|
14,510
| 17,605,479,186
|
IssuesEvent
|
2021-08-17 16:30:12
|
googleapis/python-bigtable
|
https://api.github.com/repos/googleapis/python-bigtable
|
closed
|
Add unit tests against pre-release versions of dependencies
|
api: bigtable type: process
|
In particular, to ensure that we aren't broken by the `2.0.0b1` versions of `google-api-core`, `google-cloud-core`, and `google-auth`.
|
1.0
|
Add unit tests against pre-release versions of dependencies - In particular, to ensure that we aren't broken by the `2.0.0b1` versions of `google-api-core`, `google-cloud-core`, and `google-auth`.
|
process
|
add unit tests against pre release versions of dependencies in particular to ensure that we aren t broken by the versions of google api core google cloud core and google auth
| 1
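The bigtable record above asks for unit tests against pre-release dependency versions such as `2.0.0b1`. A minimal stdlib-only sketch of spotting a PEP 440 pre-release segment — real code should prefer `packaging.version.Version(...).is_prerelease`:

```python
import re

def is_prerelease(version):
    """Return True if a PEP 440-style version string ends in a
    pre-release segment (a/b/rc), e.g. the 2.0.0b1 betas named
    in the issue. Regex sketch only; it ignores post/dev releases
    and other PEP 440 corners.
    """
    return re.search(r"\d(a|b|rc)\d*$", version) is not None

assert is_prerelease("2.0.0b1")
assert is_prerelease("2.0.0rc1")
assert not is_prerelease("1.28.0")
```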
|
6,884
| 10,025,251,583
|
IssuesEvent
|
2019-07-17 01:22:41
|
tokio-rs/tokio
|
https://api.github.com/repos/tokio-rs/tokio
|
opened
|
Pull tokio-process in tree
|
tokio-process
|
As per my conversation with @carllerche we'll be pulling `tokio-process` in tree!
This will benefit by including `tokio-process` into the `tokio` ecosystem proper, and will allow for better reviews and contributions (plus lower the bus factor given that I'm the only primary maintainer at this time).
Below is a list of tasks which should be accomplished as part of this effort. They will likely get split up in several actual PRs after the initial merge to make things more reviewable.
- [ ] flatten past `tokio-process` history (preserve authors but remove merge commits, prefix commit names with `process:`)
- [ ] rewrite any references to issues/PRs in the old repo
- [ ] update LICENSE to match the Tokio project
- [ ] update Cargo.toml references
- [ ] update CHANGELOG
- [ ] update README
- [ ] run cargo fmt
- [ ] update to 2018 edition
- [ ] remove unneeded files (.travis.yml/appveyor.yml etc)
- [ ] move crate to its own directory
- [ ] enable CI for tokio-process
- [ ] perform a one-time merge commit (not squash) to preserve history
- [ ] update to `std::future`
- [ ] bump version and set `publish = false`
- [ ] add a link in the old repo to point to the new repo
- [ ] transfer any open issues to the new repo
- [ ] archive the old repo
cc @alexcrichton as an FYI
|
1.0
|
Pull tokio-process in tree - As per my conversation with @carllerche we'll be pulling `tokio-process` in tree!
This will benefit by including `tokio-process` into the `tokio` ecosystem proper, and will allow for better reviews and contributions (plus lower the bus factor given that I'm the only primary maintainer at this time).
Below is a list of tasks which should be accomplished as part of this effort. They will likely get split up in several actual PRs after the initial merge to make things more reviewable.
- [ ] flatten past `tokio-process` history (preserve authors but remove merge commits, prefix commit names with `process:`)
- [ ] rewrite any references to issues/PRs in the old repo
- [ ] update LICENSE to match the Tokio project
- [ ] update Cargo.toml references
- [ ] update CHANGELOG
- [ ] update README
- [ ] run cargo fmt
- [ ] update to 2018 edition
- [ ] remove unneeded files (.travis.yml/appveyor.yml etc)
- [ ] move crate to its own directory
- [ ] enable CI for tokio-process
- [ ] perform a one-time merge commit (not squash) to preserve history
- [ ] update to `std::future`
- [ ] bump version and set `publish = false`
- [ ] add a link in the old repo to point to the new repo
- [ ] transfer any open issues to the new repo
- [ ] archive the old repo
cc @alexcrichton as an FYI
|
process
|
pull tokio process in tree as per my conversation with carllerche we ll be pulling tokio process in tree this will benefit by including tokio process into the tokio ecosystem proper and will allow for better reviews and contributions plus lower the bus factor given that i m the only primary maintainer at this time below is a list of tasks which should be accomplished as part of this effort they will likely get split up in several actual prs after the initial merge to make things more reviewable flatten past tokio process history preserve authors but remove merge commits prefix commit names with process rewrite any references to issues prs in the old repo update license to match the tokio project update cargo toml references update changelog update readme run cargo fmt update to edition remove unneeded files travis yml appveyor yml etc move crate to its own directory enable ci for tokio process perform a one time merge commit not squash to preserve history update to std future bump version and set publish false add a link in the old repo to point to the new repo transfer any open issues to the new repo archive the old repo cc alexcrichton as an fyi
| 1
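One checklist item in the tokio record above is to prefix imported commit subjects with `process:` while flattening history. A sketch of the kind of message rewrite a history-rewriting tool's callback could apply (simplified to strings; `git filter-repo` callbacks actually receive bytes, and the idempotence guard here is an assumption, not part of the checklist):

```python
def prefix_commit_message(message, prefix="process: "):
    """Prepend a component prefix to a commit subject line, skipping
    subjects that already carry it so a rerun doesn't double-prefix.
    """
    if message.startswith(prefix):
        return message
    return prefix + message

assert prefix_commit_message("fix child reaping") == "process: fix child reaping"
assert prefix_commit_message("process: docs") == "process: docs"
```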
|
2,209
| 5,049,190,369
|
IssuesEvent
|
2016-12-20 15:15:25
|
cfpb/design-manual
|
https://api.github.com/repos/cfpb/design-manual
|
closed
|
Changes to Design Manual (Fall 2016)
|
process and planning
|
Hey ya'll,
I wanted to open up an issue for any questions or comments you might have on the changes that were addressed in the email from Adam and JP a few weeks ago. I will be scheduling a follow-up Design Manual meeting so we can discuss and provide answers, so please post away!
For your reference, the email is copied here:
Hi All,
We’d like to give you an update on the Design Manual (DM) and Capital Framework (CF). We’ve come a long way from where we were a year ago with the Design Manual surge as well as Capital Framework (woot!) and here is what we have in mind for the near-term:
DM and CF will remain separate for now: Merging DM and CF may be an ideal future state, but for now we should focus on completing the current backlog of work needed to support our projects.
**Going forward for the DM:**
- The Product Owner (Candice) for the DM manages new standard needs and attends backlog planning for CF.
- New approved standards will be added to the manual as draft images that the PO will prioritize into the CF backlog, no coding necessary.
- Hack days (defined at a later date) will help provide more time for teams to work, to help alleviate balancing project work / DM work.
**Going forward for CF:**
- A few FEWDs will be named as “primary maintainers” and asked to devote a percentage of their time to CF each week. This role will be completely voluntary, but project teams will accommodate for this in their planning.
- There will be alternating bi-weekly backlog refinement with Adam as Product Owner where we will organize and prioritize CF work.
- Expand the “definition of done” to include new component being available in the Design Manual.
- In the near term we will wrap up the Atomic work and going forward we will prioritize work based on the bi-weekly backlog refinement meeting and partnership with the DM team.
- Critical Capital Framework updates/needs could go through either the CF.gov Platform team or the Design Ops intake process and made part of their officially tracked work.
**Who is the audience for DM and CF?**
We strive for transparency in making all our work public and available, but the primary audience is us, our designers and developers. With that in mind we hope that it will help us continue to think of the work as being iterative.
We recognize there are a lot of questions that aren’t addressed here and new ones will arise as we move forward. Our process should always facilitate the best work and we will adjust wherever necessary. Please send us any feedback, questions, comments.
Thanks,
Adam & JP
|
1.0
|
Changes to Design Manual (Fall 2016) - Hey ya'll,
I wanted to open up an issue for any questions or comments you might have on the changes that were addressed in the email from Adam and JP a few weeks ago. I will be scheduling a follow-up Design Manual meeting so we can discuss and provide answers, so please post away!
For your reference, the email is copied here:
Hi All,
We’d like to give you an update on the Design Manual (DM) and Capital Framework (CF). We’ve come a long way from where we were a year ago with the Design Manual surge as well as Capital Framework (woot!) and here is what we have in mind for the near-term:
DM and CF will remain separate for now: Merging DM and CF may be an ideal future state, but for now we should focus on completing the current backlog of work needed to support our projects.
**Going forward for the DM:**
- The Product Owner (Candice) for the DM manages new standard needs and attends backlog planning for CF.
- New approved standards will be added to the manual as draft images that the PO will prioritize into the CF backlog, no coding necessary.
- Hack days (defined at a later date) will help provide more time for teams to work, to help alleviate balancing project work / DM work.
**Going forward for CF:**
- A few FEWDs will be named as “primary maintainers” and asked to devote a percentage of their time to CF each week. This role will be completely voluntary, but project teams will accommodate for this in their planning.
- There will be alternating bi-weekly backlog refinement with Adam as Product Owner where we will organize and prioritize CF work.
- Expand the “definition of done” to include new component being available in the Design Manual.
- In the near term we will wrap up the Atomic work and going forward we will prioritize work based on the bi-weekly backlog refinement meeting and partnership with the DM team.
- Critical Capital Framework updates/needs could go through either the CF.gov Platform team or the Design Ops intake process and made part of their officially tracked work.
**Who is the audience for DM and CF?**
We strive for transparency in making all our work public and available, but the primary audience is us, our designers and developers. With that in mind we hope that it will help us continue to think of the work as being iterative.
We recognize there are a lot of questions that aren’t addressed here and new ones will arise as we move forward. Our process should always facilitate the best work and we will adjust wherever necessary. Please send us any feedback, questions, comments.
Thanks,
Adam & JP
|
process
|
changes to design manual fall hey ya ll i wanted to open up an issue for any questions or comments you might have on the changes that were addressed in the email from adam and jp a few weeks ago i will be scheduling a follow up design manual meeting so we can discuss and provide answers so please post away for your reference the email is copied here hi all we’d like to give you an update on the design manual dm and capital framework cf we’ve come a long way from where we were a year ago with the design manual surge as well as capital framework woot and here is what we have in mind for the near term dm and cf will remain separate for now merging dm and cf may be an ideal future state but for now we should focus on completing the current backlog of work needed to support our projects going forward for the dm the product owner candice for the dm manages new standard needs and attends backlog planning for cf new approved standards will be added to the manual as draft images that the po will prioritize into the cf backlog no coding necessary hack days defined at a later date will help provide more time for teams to work to help alleviate balancing project work dm work going forward for cf a few fewds will be named as “primary maintainers” and asked to devote a percentage of their time to cf each week this role will be completely voluntary but project teams will accommodate for this in their planning there will be alternating bi weekly backlog refinement with adam as product owner where we will organize and prioritize cf work expand the “definition of done” to include new component being available in the design manual in the near term we will wrap up the atomic work and going forward we will prioritize work based on the bi weekly backlog refinement meeting and partnership with the dm team critical capital framework updates needs could go through either the cf gov platform team or the design ops intake process and made part of their officially tracked work who is the 
audience for dm and cf we strive for transparency in making all our work public and available but the primary audience is us our designers and developers with that in mind we hope that it will help us continue to think of the work as being iterative we recognize there are a lot of questions that aren’t addressed here and new ones will arise as we move forward our process should always facilitate the best work and we will adjust wherever necessary please send us any feedback questions comments thanks adam jp
| 1
|
570,109
| 17,019,153,704
|
IssuesEvent
|
2021-07-02 16:04:53
|
code-ready/crc
|
https://api.github.com/repos/code-ready/crc
|
closed
|
[BUG] Error : current disk image capacity is bigger than the requested size
|
kind/bug priority/minor
|
### General information
* OS: Linux
* Hypervisor: KVM
* Did you run `crc setup` before starting it (Yes/No)? YES
* Running CRC on: Baremetal-Server
## CRC version
```bash
CodeReady Containers version: 1.28.0+08de64bd
OpenShift version: 4.7.13 (embedded in executable)
```
## CRC status
```bash
Machine does not exist. Use 'crc start' to create it
```
## CRC config
```bash
- consent-telemetry : yes
- cpus : 15
- disk-size : 50
- memory : 60000
- pull-secret-file : /mnt/nvme_space1/mohit/pull-secret.txt
```
## Host Operating System
```bash
NAME="Ubuntu"
VERSION="20.10 (Groovy Gorilla)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.10"
VERSION_ID="20.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=groovy
UBUNTU_CODENAME=groovy
```
### Steps to reproduce
```
crc config set cpus 15
crc config set memory 60000
crc config set disk-size 50
crc config set pull-secret-file ~/pull-secret.txt
crc config set consent-telemetry yes
crc config view
crc setup
crc start
```
### Expected
- CRC VM should start cleanly with root disk size of 50GB
### Actual
- Unable to start CRC, because of error `Error creating machine: Error in driver during machine creation: current disk image capacity is bigger than the requested size (75161927680 > 53687091200)`
### Logs
```
[karan@beast ~]$ crc start
INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if active user/process is currently part of the libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if AppArmor is configured
INFO Checking if systemd-networkd is running
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if dnsmasq configurations file exist for NetworkManager
INFO Checking if the systemd-resolved service is running
INFO Checking if /etc/NetworkManager/dispatcher.d/99-crc.sh exists
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Loading bundle: crc_libvirt_4.7.13...
INFO Creating CodeReady Containers VM for OpenShift 4.7.13...
Error creating machine: Error in driver during machine creation: current disk image capacity is bigger than the requested size (75161927680 > 53687091200)
[karan@beast ~]$
```
|
1.0
|
[BUG] Error : current disk image capacity is bigger than the requested size - ### General information
* OS: Linux
* Hypervisor: KVM
* Did you run `crc setup` before starting it (Yes/No)? YES
* Running CRC on: Baremetal-Server
## CRC version
```bash
CodeReady Containers version: 1.28.0+08de64bd
OpenShift version: 4.7.13 (embedded in executable)
```
## CRC status
```bash
Machine does not exist. Use 'crc start' to create it
```
## CRC config
```bash
- consent-telemetry : yes
- cpus : 15
- disk-size : 50
- memory : 60000
- pull-secret-file : /mnt/nvme_space1/mohit/pull-secret.txt
```
## Host Operating System
```bash
NAME="Ubuntu"
VERSION="20.10 (Groovy Gorilla)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.10"
VERSION_ID="20.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=groovy
UBUNTU_CODENAME=groovy
```
### Steps to reproduce
```
crc config set cpus 15
crc config set memory 60000
crc config set disk-size 50
crc config set pull-secret-file ~/pull-secret.txt
crc config set consent-telemetry yes
crc config view
crc setup
crc start
```
### Expected
- CRC VM should start cleanly with root disk size of 50GB
### Actual
- Unable to start CRC, because of error `Error creating machine: Error in driver during machine creation: current disk image capacity is bigger than the requested size (75161927680 > 53687091200)`
### Logs
```
[karan@beast ~]$ crc start
INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if active user/process is currently part of the libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if AppArmor is configured
INFO Checking if systemd-networkd is running
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if dnsmasq configurations file exist for NetworkManager
INFO Checking if the systemd-resolved service is running
INFO Checking if /etc/NetworkManager/dispatcher.d/99-crc.sh exists
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Loading bundle: crc_libvirt_4.7.13...
INFO Creating CodeReady Containers VM for OpenShift 4.7.13...
Error creating machine: Error in driver during machine creation: current disk image capacity is bigger than the requested size (75161927680 > 53687091200)
[karan@beast ~]$
```
|
non_process
|
error current disk image capacity is bigger than the requested size general information os linux hypervisor kvm did you run crc setup before starting it yes no yes running crc on baremetal server crc version bash codeready containers version openshift version embedded in executable crc status bash machine does not exist use crc start to create it crc config bash consent telemetry yes cpus disk size memory pull secret file mnt nvme mohit pull secret txt host operating system bash name ubuntu version groovy gorilla id ubuntu id like debian pretty name ubuntu version id home url support url bug report url privacy policy url version codename groovy ubuntu codename groovy steps to reproduce crc config set cpus crc config set memory crc config set disk size crc config set pull secret file pull secret txt crc config set consent telemetry yes crc config view crc setup crc start expected crc vm should start cleanly with root disk size of actual unable to start crc because of error error creating machine error in driver during machine creation current disk image capacity is bigger than the requested size logs crc start info checking if running as non root info checking if running inside info checking if crc admin helper executable is cached info checking for obsolete admin helper executable info checking if running on a supported cpu architecture info checking minimum ram requirements info checking if virtualization is enabled info checking if kvm is enabled info checking if libvirt is installed info checking if user is part of libvirt group info checking if active user process is currently part of the libvirt group info checking if libvirt daemon is running info checking if a supported libvirt version is installed info checking if crc driver libvirt is installed info checking if apparmor is configured info checking if systemd networkd is running info checking if networkmanager is installed info checking if networkmanager service is running info checking if dnsmasq 
configurations file exist for networkmanager info checking if the systemd resolved service is running info checking if etc networkmanager dispatcher d crc sh exists info checking if libvirt crc network is available info checking if libvirt crc network is active info loading bundle crc libvirt info creating codeready containers vm for openshift error creating machine error in driver during machine creation current disk image capacity is bigger than the requested size
| 0
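The byte counts in the crc error above correspond to an existing 70 GiB disk image versus the 50 GiB requested via `disk-size 50`, which is why the driver refuses to shrink it; the arithmetic can be checked directly:

```python
GIB = 1024 ** 3  # bytes per GiB

def to_gib(n_bytes):
    """Convert a byte count to GiB (exact for the values in the log)."""
    return n_bytes / GIB

# The two sizes from the libvirt error message:
actual = 75_161_927_680     # current disk image capacity
requested = 53_687_091_200  # requested size (disk-size 50)

print(to_gib(actual))     # 70.0
print(to_gib(requested))  # 50.0
```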
|
22,644
| 31,895,826,789
|
IssuesEvent
|
2023-09-18 01:31:53
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - earliestAgeOrLowestStage
|
Term - change Class - GeologicalContext normative Task Group - Material Sample Process - complete
|
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_earliestAgeOrLowestStage
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): earliestAgeOrLowestStage
* Term label (English, not normative): Earliest Age Or Lowest Stage
* * Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the earliest possible geochronologic age or lowest chronostratigraphic stage attributable to the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Atlantic, Boreal, Skullrockian
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
1.0
|
Change term - earliestAgeOrLowestStage - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_earliestAgeOrLowestStage
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): earliestAgeOrLowestStage
* Term label (English, not normative): Earliest Age Or Lowest Stage
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the earliest possible geochronologic age or lowest chronostratigraphic stage attributable to the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Atlantic, Boreal, Skullrockian
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
process
|
change term earliestageorloweststage term change submitter efficacy justification why is this change necessary create consistency of terms for material in darwin core demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes earliestageorloweststage term label english not normative earliest age or lowest stage organized in class e g occurrence event location taxon geological context definition of the term normative the full name of the earliest possible geochronologic age or lowest chronostratigraphic stage attributable to the stratigraphic horizon from which the cataloged item dwc materialentity was collected usage comments recommendations regarding content etc not normative examples not normative atlantic boreal skullrockian refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
| 1
|
8,032
| 11,210,759,054
|
IssuesEvent
|
2020-01-06 14:00:05
|
shirou/gopsutil
|
https://api.github.com/repos/shirou/gopsutil
|
closed
|
Monitor Process Usage (watching changes on process)
|
package:process question
|
How can I calculate process usage? I have done this:
```go
cmd := exec.Command("someCommand")
cmd.Start()
p, _ := process.NewProcess(int32(cmd.Process.Pid))
fmt.Println(p.MemoryInfo())
fmt.Println(p.CPUPercent())
```
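The gopsutil snippet above samples a child process's memory and CPU usage through the library. As a rough, library-free illustration of the underlying idea (CPU percent is a CPU-time delta divided by a wall-clock delta), here is a minimal Python sketch that measures the current process using only the standard library. It is not the gopsutil API, and the 0.1 s sampling interval is an arbitrary choice:

```python
import os
import time

def cpu_percent(interval: float = 0.1) -> float:
    """CPU usage of the current process over `interval` seconds:
    (user + system CPU time delta) / (wall-clock delta) * 100."""
    wall0 = time.monotonic()
    cpu0 = sum(os.times()[:2])   # user + system CPU seconds so far
    time.sleep(interval)
    wall1 = time.monotonic()
    cpu1 = sum(os.times()[:2])
    return (cpu1 - cpu0) / (wall1 - wall0) * 100.0

if __name__ == "__main__":
    print(f"current process CPU: {cpu_percent():.1f}%")
```

A mostly idle process reads near 0%, and a busy loop on one core reads near 100%. Tools like gopsutil and psutil apply the same delta idea, reading per-process counters from the OS instead of `os.times()`.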
|
1.0
|
Monitor Process Usage (watching changes on process) - How can I calculate process usage? I have done this:
```go
cmd := exec.Command("someCommand")
cmd.Start()
p, _ := process.NewProcess(int32(cmd.Process.Pid))
fmt.Println(p.MemoryInfo())
fmt.Println(p.CPUPercent())
```
|
process
|
monitor process usage watching changes on process how i can calculate process usage i have did this cmd exec command somecommand cmd start p process newprocess cmd process pid fmt println p memoryinfo fmt println p cpupercent
| 1
|
7,360
| 10,509,173,196
|
IssuesEvent
|
2019-09-27 10:19:14
|
prisma/studio
|
https://api.github.com/repos/prisma/studio
|
closed
|
Reloading the browser crashes Studio
|
bug/2-confirmed process/candidate
|
The whole dev command crashes with this stack trace
```
Error: WebSocket is not open: readyState 3 (CLOSED)
at WebSocket.send (/usr/local/lib/node_modules/prisma2/build/index.js:165450:19)
at ChildProcess.photonWorker.on.msg (/usr/local/lib/node_modules/prisma2/build/index.js:652:60)
at ChildProcess.emit (events.js:198:13)
at ChildProcess.EventEmitter.emit (domain.js:448:20)
at emit (internal/child_process.js:832:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
```
|
1.0
|
Reloading the browser crashes Studio - The whole dev command crashes with this stack trace
```
Error: WebSocket is not open: readyState 3 (CLOSED)
at WebSocket.send (/usr/local/lib/node_modules/prisma2/build/index.js:165450:19)
at ChildProcess.photonWorker.on.msg (/usr/local/lib/node_modules/prisma2/build/index.js:652:60)
at ChildProcess.emit (events.js:198:13)
at ChildProcess.EventEmitter.emit (domain.js:448:20)
at emit (internal/child_process.js:832:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
```
|
process
|
reloading the browser crashes studio the whole dev command crashes with this stack trace error websocket is not open readystate closed at websocket send usr local lib node modules build index js at childprocess photonworker on msg usr local lib node modules build index js at childprocess emit events js at childprocess eventemitter emit domain js at emit internal child process js at process tickcallback internal process next tick js
| 1
|
60,808
| 14,919,242,045
|
IssuesEvent
|
2021-01-22 23:33:05
|
arfc/cairo
|
https://api.github.com/repos/arfc/cairo
|
closed
|
[Good First Issue] Add tests for lorenz96.py
|
Comp:Build Difficulty:1-Beginner Priority:3-Desired Status:1-New Type:enhancement
|
This issue can be closed when tests have been added for ``lorenz96.py`` in a file called ``tests/test_lorenz96.py``
|
1.0
|
[Good First Issue] Add tests for lorenz96.py - This issue can be closed when tests have been added for ``lorenz96.py`` in a file called ``tests/test_lorenz96.py``
|
non_process
|
add tests for py this issue can be closed when tests have been added for py in a file called tests test py
| 0
|
12,363
| 14,891,357,160
|
IssuesEvent
|
2021-01-21 00:41:06
|
fastlane/fastlane
|
https://api.github.com/repos/fastlane/fastlane
|
closed
|
Snapshot not saving image files on Apple Silicon M1 (ARM64)
|
tool: snapshot topic: m1 processor
|
### New Issue Checklist
- [x] Updated fastlane to the latest version
- [x] I read the [Contribution Guidelines](https://github.com/fastlane/fastlane/blob/master/CONTRIBUTING.md)
- [x] I read [docs.fastlane.tools](https://docs.fastlane.tools)
- [x] I searched for [existing GitHub issues](https://github.com/fastlane/fastlane/issues)
### Issue Description
As mentioned by @MetaImi in #17039 fastlane is not saving the generated snapshot image files on M1 Macs due to a missing check inside the getCacheDirectory() function in the SnapshotHelper.swift file.
The function is only checking for architectures _i386_ and _x86_64_, but Apple Silicon is recognized as _arm64_, so the check needs to be modified to make it work again:
```swift
class func getCacheDirectory() throws -> URL {
let cachePath = "Library/Caches/tools.fastlane"
// on OSX config is stored in /Users/<username>/Library
// and on iOS/tvOS/WatchOS it's in simulator's home dir
#if os(OSX)
let homeDir = URL(fileURLWithPath: NSHomeDirectory())
return homeDir.appendingPathComponent(cachePath)
#elseif arch(i386) || arch(x86_64) || arch(arm64)
guard let simulatorHostHome = ProcessInfo().environment["SIMULATOR_HOST_HOME"] else {
throw SnapshotError.cannotFindSimulatorHomeDirectory
}
let homeDir = URL(fileURLWithPath: simulatorHostHome)
return homeDir.appendingPathComponent(cachePath)
#else
throw SnapshotError.cannotRunOnPhysicalDevice
#endif
}
```
Fastlane itself is running fine without any errors, other than "failed" icons in console output, but the image files are missing inside the desired directory.
##### Command executed
fastlane screenshots
##### Complete output when running fastlane, including the stack trace and command used
Normal run. Only noticeable thing:
```
+---------------------------------------+-------+-------+
| snapshot results |
+---------------------------------------+-------+-------+
| Device | en-US | de-DE |
+---------------------------------------+-------+-------+
| iPhone 8 Plus | ❌ | ❌ |
| iPhone 11 Pro Max | ❌ | ❌ |
| iPad Pro (12.9-inch) (4th generation) | ❌ | ❌ |
| iPad Pro (12.9-inch) (2nd generation) | ❌ | ❌ |
+---------------------------------------+-------+-------+
```
### Environment
<!--
Please run `fastlane env` and copy the output below. This will help us help you.
If you used the `--capture_output` option, please remove this block as it is already included there.
-->
<details><summary>🚫 fastlane environment 🚫</summary>
### Stack
| Key | Value |
| --------------------------- | -------------------------------------------------------------- |
| OS | 11.1 |
| Ruby | 2.6.3 |
| Bundler? | false |
| Git | git version 2.24.3 (Apple Git-128) |
| Installation Source | /usr/local/bin/fastlane |
| Host | macOS 11.1 (20C69) |
| Ruby Lib Dir | /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib |
| OpenSSL Version | LibreSSL 2.8.3 |
| Is contained | false |
| Is homebrew | false |
| Is installed via Fabric.app | false |
| Xcode Path | /Applications/Xcode.app/Contents/Developer/ |
| Xcode Version | 12.3 |
### System Locale
| Error |
| --------------------------- |
| No Locale with UTF8 found 🚫 |
### fastlane files:
<details><summary>`./fastlane/Fastfile`</summary>
```ruby
# This file contains the fastlane.tools configuration
# You can find the documentation at https://docs.fastlane.tools
#
# For a list of all available actions, check out
#
# https://docs.fastlane.tools/actions
#
# For a list of all available plugins, check out
#
# https://docs.fastlane.tools/plugins/available-plugins
#
# Uncomment the line if you want fastlane to automatically update itself
# update_fastlane
default_platform(:ios)
platform :ios do
desc "Push a new beta build to TestFlight"
lane :beta do
increment_build_number(xcodeproj: "ThreeT.xcodeproj")
build_app(scheme: "ThreeT")
#upload_to_testflight
end
end
lane :screenshots do
capture_screenshots(
dark_mode: false,
output_directory: "./fastlane/screenshots/light"
)
capture_screenshots(
dark_mode: true,
output_directory: "./fastlane/screenshots/dark"
)
end
```
</details>
### fastlane gems
| Gem | Version | Update-Status |
| -------- | ------- | ------------- |
| fastlane | 2.170.0 | ✅ Up-To-Date |
### Loaded fastlane plugins:
**No plugins Loaded**
<details><summary><b>Loaded gems</b></summary>
| Gem | Version |
| ------------------------- | ------------ |
| did_you_mean | 1.3.0 |
| slack-notifier | 2.3.2 |
| atomos | 0.1.3 |
| CFPropertyList | 2.3.6 |
| claide | 1.0.3 |
| colored2 | 3.1.2 |
| nanaimo | 0.3.0 |
| xcodeproj | 1.19.0 |
| rouge | 2.0.7 |
| xcpretty | 0.3.0 |
| terminal-notifier | 2.0.0 |
| unicode-display_width | 1.7.0 |
| terminal-table | 1.8.0 |
| plist | 3.5.0 |
| public_suffix | 4.0.6 |
| addressable | 2.7.0 |
| multipart-post | 2.0.0 |
| word_wrap | 1.0.0 |
| tty-screen | 0.8.1 |
| tty-cursor | 0.7.1 |
| tty-spinner | 0.9.3 |
| babosa | 1.0.4 |
| colored | 1.2 |
| highline | 1.7.10 |
| commander-fastlane | 4.4.6 |
| excon | 0.78.1 |
| ruby2_keywords | 0.0.2 |
| faraday | 1.2.0 |
| unf_ext | 0.0.7.7 |
| unf | 0.1.4 |
| domain_name | 0.5.20190701 |
| http-cookie | 1.0.3 |
| faraday-cookie_jar | 0.0.7 |
| faraday_middleware | 1.0.0 |
| fastimage | 2.2.0 |
| gh_inspector | 1.1.3 |
| mini_magick | 4.11.0 |
| rubyzip | 2.3.0 |
| security | 0.1.3 |
| xcpretty-travis-formatter | 1.0.0 |
| dotenv | 2.7.6 |
| naturally | 2.2.0 |
| simctl | 1.6.8 |
| jwt | 2.2.2 |
| uber | 0.1.0 |
| declarative | 0.0.20 |
| declarative-option | 0.1.0 |
| representable | 3.0.4 |
| retriable | 3.1.2 |
| mini_mime | 1.0.2 |
| multi_json | 1.15.0 |
| signet | 0.14.0 |
| memoist | 0.16.2 |
| os | 1.1.1 |
| googleauth | 0.14.0 |
| httpclient | 2.8.3 |
| google-api-client | 0.38.0 |
| google-cloud-env | 1.4.0 |
| google-cloud-errors | 1.0.1 |
| google-cloud-core | 1.5.0 |
| rake | 12.3.2 |
| digest-crc | 0.6.3 |
| google-cloud-storage | 1.29.2 |
| emoji_regex | 3.2.1 |
| jmespath | 1.4.0 |
| aws-partitions | 1.413.0 |
| aws-eventstream | 1.1.0 |
| aws-sigv4 | 1.2.2 |
| aws-sdk-core | 3.110.0 |
| aws-sdk-kms | 1.40.0 |
| aws-sdk-s3 | 1.87.0 |
| json | 2.3.1 |
| bundler | 2.1.4 |
| forwardable | 1.2.0 |
| logger | 1.3.0 |
| date | 2.0.0 |
| stringio | 0.0.2 |
| ipaddr | 1.2.2 |
| openssl | 2.1.2 |
| ostruct | 0.1.0 |
| strscan | 1.0.0 |
| fileutils | 1.1.0 |
| etc | 1.0.1 |
| io-console | 0.4.7 |
| zlib | 1.0.0 |
| libxml-ruby | 3.1.0 |
| rexml | 3.2.4 |
| psych | 3.1.0 |
| mutex_m | 0.1.0 |
</details>
*generated on:* **2020-12-27**
</details>
|
1.0
|
Snapshot not saving image files on Apple Silicon M1 (ARM64) - ### New Issue Checklist
- [x] Updated fastlane to the latest version
- [x] I read the [Contribution Guidelines](https://github.com/fastlane/fastlane/blob/master/CONTRIBUTING.md)
- [x] I read [docs.fastlane.tools](https://docs.fastlane.tools)
- [x] I searched for [existing GitHub issues](https://github.com/fastlane/fastlane/issues)
### Issue Description
As mentioned by @MetaImi in #17039 fastlane is not saving the generated snapshot image files on M1 Macs due to a missing check inside the getCacheDirectory() function in the SnapshotHelper.swift file.
The function is only checking for architectures _i386_ and _x86_64_, but Apple Silicon is recognized as _arm64_, so the check needs to be modified to make it work again:
```swift
class func getCacheDirectory() throws -> URL {
let cachePath = "Library/Caches/tools.fastlane"
// on OSX config is stored in /Users/<username>/Library
// and on iOS/tvOS/WatchOS it's in simulator's home dir
#if os(OSX)
let homeDir = URL(fileURLWithPath: NSHomeDirectory())
return homeDir.appendingPathComponent(cachePath)
#elseif arch(i386) || arch(x86_64) || arch(arm64)
guard let simulatorHostHome = ProcessInfo().environment["SIMULATOR_HOST_HOME"] else {
throw SnapshotError.cannotFindSimulatorHomeDirectory
}
let homeDir = URL(fileURLWithPath: simulatorHostHome)
return homeDir.appendingPathComponent(cachePath)
#else
throw SnapshotError.cannotRunOnPhysicalDevice
#endif
}
```
Fastlane itself is running fine without any errors, other than "failed" icons in console output, but the image files are missing inside the desired directory.
##### Command executed
fastlane screenshots
##### Complete output when running fastlane, including the stack trace and command used
Normal run. Only noticeable thing:
```
+---------------------------------------+-------+-------+
| snapshot results |
+---------------------------------------+-------+-------+
| Device | en-US | de-DE |
+---------------------------------------+-------+-------+
| iPhone 8 Plus | ❌ | ❌ |
| iPhone 11 Pro Max | ❌ | ❌ |
| iPad Pro (12.9-inch) (4th generation) | ❌ | ❌ |
| iPad Pro (12.9-inch) (2nd generation) | ❌ | ❌ |
+---------------------------------------+-------+-------+
```
### Environment
<!--
Please run `fastlane env` and copy the output below. This will help us help you.
If you used the `--capture_output` option, please remove this block as it is already included there.
-->
<details><summary>🚫 fastlane environment 🚫</summary>
### Stack
| Key | Value |
| --------------------------- | -------------------------------------------------------------- |
| OS | 11.1 |
| Ruby | 2.6.3 |
| Bundler? | false |
| Git | git version 2.24.3 (Apple Git-128) |
| Installation Source | /usr/local/bin/fastlane |
| Host | macOS 11.1 (20C69) |
| Ruby Lib Dir | /System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib |
| OpenSSL Version | LibreSSL 2.8.3 |
| Is contained | false |
| Is homebrew | false |
| Is installed via Fabric.app | false |
| Xcode Path | /Applications/Xcode.app/Contents/Developer/ |
| Xcode Version | 12.3 |
### System Locale
| Error |
| --------------------------- |
| No Locale with UTF8 found 🚫 |
### fastlane files:
<details><summary>`./fastlane/Fastfile`</summary>
```ruby
# This file contains the fastlane.tools configuration
# You can find the documentation at https://docs.fastlane.tools
#
# For a list of all available actions, check out
#
# https://docs.fastlane.tools/actions
#
# For a list of all available plugins, check out
#
# https://docs.fastlane.tools/plugins/available-plugins
#
# Uncomment the line if you want fastlane to automatically update itself
# update_fastlane
default_platform(:ios)
platform :ios do
desc "Push a new beta build to TestFlight"
lane :beta do
increment_build_number(xcodeproj: "ThreeT.xcodeproj")
build_app(scheme: "ThreeT")
#upload_to_testflight
end
end
lane :screenshots do
capture_screenshots(
dark_mode: false,
output_directory: "./fastlane/screenshots/light"
)
capture_screenshots(
dark_mode: true,
output_directory: "./fastlane/screenshots/dark"
)
end
```
</details>
### fastlane gems
| Gem | Version | Update-Status |
| -------- | ------- | ------------- |
| fastlane | 2.170.0 | ✅ Up-To-Date |
### Loaded fastlane plugins:
**No plugins Loaded**
<details><summary><b>Loaded gems</b></summary>
| Gem | Version |
| ------------------------- | ------------ |
| did_you_mean | 1.3.0 |
| slack-notifier | 2.3.2 |
| atomos | 0.1.3 |
| CFPropertyList | 2.3.6 |
| claide | 1.0.3 |
| colored2 | 3.1.2 |
| nanaimo | 0.3.0 |
| xcodeproj | 1.19.0 |
| rouge | 2.0.7 |
| xcpretty | 0.3.0 |
| terminal-notifier | 2.0.0 |
| unicode-display_width | 1.7.0 |
| terminal-table | 1.8.0 |
| plist | 3.5.0 |
| public_suffix | 4.0.6 |
| addressable | 2.7.0 |
| multipart-post | 2.0.0 |
| word_wrap | 1.0.0 |
| tty-screen | 0.8.1 |
| tty-cursor | 0.7.1 |
| tty-spinner | 0.9.3 |
| babosa | 1.0.4 |
| colored | 1.2 |
| highline | 1.7.10 |
| commander-fastlane | 4.4.6 |
| excon | 0.78.1 |
| ruby2_keywords | 0.0.2 |
| faraday | 1.2.0 |
| unf_ext | 0.0.7.7 |
| unf | 0.1.4 |
| domain_name | 0.5.20190701 |
| http-cookie | 1.0.3 |
| faraday-cookie_jar | 0.0.7 |
| faraday_middleware | 1.0.0 |
| fastimage | 2.2.0 |
| gh_inspector | 1.1.3 |
| mini_magick | 4.11.0 |
| rubyzip | 2.3.0 |
| security | 0.1.3 |
| xcpretty-travis-formatter | 1.0.0 |
| dotenv | 2.7.6 |
| naturally | 2.2.0 |
| simctl | 1.6.8 |
| jwt | 2.2.2 |
| uber | 0.1.0 |
| declarative | 0.0.20 |
| declarative-option | 0.1.0 |
| representable | 3.0.4 |
| retriable | 3.1.2 |
| mini_mime | 1.0.2 |
| multi_json | 1.15.0 |
| signet | 0.14.0 |
| memoist | 0.16.2 |
| os | 1.1.1 |
| googleauth | 0.14.0 |
| httpclient | 2.8.3 |
| google-api-client | 0.38.0 |
| google-cloud-env | 1.4.0 |
| google-cloud-errors | 1.0.1 |
| google-cloud-core | 1.5.0 |
| rake | 12.3.2 |
| digest-crc | 0.6.3 |
| google-cloud-storage | 1.29.2 |
| emoji_regex | 3.2.1 |
| jmespath | 1.4.0 |
| aws-partitions | 1.413.0 |
| aws-eventstream | 1.1.0 |
| aws-sigv4 | 1.2.2 |
| aws-sdk-core | 3.110.0 |
| aws-sdk-kms | 1.40.0 |
| aws-sdk-s3 | 1.87.0 |
| json | 2.3.1 |
| bundler | 2.1.4 |
| forwardable | 1.2.0 |
| logger | 1.3.0 |
| date | 2.0.0 |
| stringio | 0.0.2 |
| ipaddr | 1.2.2 |
| openssl | 2.1.2 |
| ostruct | 0.1.0 |
| strscan | 1.0.0 |
| fileutils | 1.1.0 |
| etc | 1.0.1 |
| io-console | 0.4.7 |
| zlib | 1.0.0 |
| libxml-ruby | 3.1.0 |
| rexml | 3.2.4 |
| psych | 3.1.0 |
| mutex_m | 0.1.0 |
</details>
*generated on:* **2020-12-27**
</details>
|
process
|
snapshot not saving image files on apple silicon new issue checklist updated fastlane to the latest version i read the i read i searched for issue description as mentioned by metaimi in fastlane is not saving the generated snapshot image files on macs due to a missing check inside the getcachedirectory function in the snapshothelper swift file the function is only checking for architectures and but apple silicon is recognized as so the check needs to be modified to make it work again swift class func getcachedirectory throws url let cachepath library caches tools fastlane on osx config is stored in users library and on ios tvos watchos it s in simulator s home dir if os osx let homedir url fileurlwithpath nshomedirectory return homedir appendingpathcomponent cachepath elseif arch arch arch guard let simulatorhosthome processinfo environment else throw snapshoterror cannotfindsimulatorhomedirectory let homedir url fileurlwithpath simulatorhosthome return homedir appendingpathcomponent cachepath else throw snapshoterror cannotrunonphysicaldevice endif fastlane itself is running fine without any errors other than failed icons in console output but the image files are missing inside the desired directory command executed fastlane screenshots complete output when running fastlane including the stack trace and command used normal run only noticeable thing snapshot results device en us de de iphone plus ❌ ❌ iphone pro max ❌ ❌ ipad pro inch generation ❌ ❌ ipad pro inch generation ❌ ❌ environment please run fastlane env and copy the output below this will help us help you if you used the capture output option please remove this block as it is already included there 🚫 fastlane environment 🚫 stack key value os ruby bundler false git git version apple git installation source usr local bin fastlane host macos ruby lib dir system library frameworks ruby framework versions usr lib openssl version libressl is contained false is homebrew false is installed via fabric app false xcode path applications xcode app contents developer xcode version system locale error no locale with found 🚫 fastlane files fastlane fastfile ruby this file contains the fastlane tools configuration you can find the documentation at for a list of all available actions check out for a list of all available plugins check out uncomment the line if you want fastlane to automatically update itself update fastlane default platform ios platform ios do desc push a new beta build to testflight lane beta do increment build number xcodeproj threet xcodeproj build app scheme threet upload to testflight end end lane screenshots do capture screenshots dark mode false output directory fastlane screenshots light capture screenshots dark mode true output directory fastlane screenshots dark end fastlane gems gem version update status fastlane ✅ up to date loaded fastlane plugins no plugins loaded loaded gems gem version did you mean slack notifier atomos cfpropertylist claide nanaimo xcodeproj rouge xcpretty terminal notifier unicode display width terminal table plist public suffix addressable multipart post word wrap tty screen tty cursor tty spinner babosa colored highline commander fastlane excon keywords faraday unf ext unf domain name http cookie faraday cookie jar faraday middleware fastimage gh inspector mini magick rubyzip security xcpretty travis formatter dotenv naturally simctl jwt uber declarative declarative option representable retriable mini mime multi json signet memoist os googleauth httpclient google api client google cloud env google cloud errors google cloud core rake digest crc google cloud storage emoji regex jmespath aws partitions aws eventstream aws aws sdk core aws sdk kms aws sdk json bundler forwardable logger date stringio ipaddr openssl ostruct strscan fileutils etc io console zlib libxml ruby rexml psych mutex m generated on
| 1
|
17,009
| 22,386,211,605
|
IssuesEvent
|
2022-06-17 00:51:04
|
figlesias221/ProyectoDevOps_Grupo3_IglesiasPerezMolinoloJuan
|
https://api.github.com/repos/figlesias221/ProyectoDevOps_Grupo3_IglesiasPerezMolinoloJuan
|
closed
|
Review: date error in lodging reservations
|
task process
|
Each product owner must follow the scenarios described in Gherkin as a guide.
Effort in HS-P (per person):
Estimated: 1
Actual: 1 (@mperezjodal )
|
1.0
|
Review: date error in lodging reservations - Each product owner must follow the scenarios described in Gherkin as a guide.
Effort in HS-P (per person):
Estimated: 1
Actual: 1 (@mperezjodal )
|
process
|
review error de fechas en reserva de hospedajes cada product owner debe seguir como guía los escenarios descritos en gherkin esfuerzo en hs p por persona estimado real mperezjodal
| 1
|
5,039
| 7,855,117,682
|
IssuesEvent
|
2018-06-20 23:43:41
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE] Fix incorrectly swapped layer_property function min_scale and max_scale values
|
Automatic new feature Processing ToDocOrNotToDoc?
|
Original commit: https://github.com/qgis/QGIS/commit/cd9a802c5c3838efae90b550192f5de89e5b9293 by nyalldawson
The values returned were the opposite of what's shown in the GUI.
Marked as feature as reminder to include this project break in the
release notes
|
1.0
|
[FEATURE] Fix incorrectly swapped layer_property function min_scale and max_scale values - Original commit: https://github.com/qgis/QGIS/commit/cd9a802c5c3838efae90b550192f5de89e5b9293 by nyalldawson
The values returned were the opposite of what's shown in the GUI.
Marked as feature as reminder to include this project break in the
release notes
|
process
|
fix incorrectly swapper layer property function min scale and max scale values original commit by nyalldawson the values returned were the opposite of what s shown in the gui marked as feature as reminder to include this project break in the release notes
| 1
|
703,440
| 24,159,771,999
|
IssuesEvent
|
2022-09-22 10:37:57
|
harvester/harvester
|
https://api.github.com/repos/harvester/harvester
|
closed
|
[FEATURE] Harvester supports kube-audit log
|
kind/enhancement priority/0
|
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Requirement: https://github.com/harvester/harvester/issues/578
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
Collect (kubernetes) `audit` log by default.
<del>Send `audit` log to `loki` by default, which can be queried by embedded `grafana`.</del>
Support send `audit` logs to general log servers.
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
HEP PR:
https://github.com/harvester/harvester/pull/2684
HEP 578 event and audit log [CI SKIP] #2684
|
1.0
|
[FEATURE] Harvester supports kube-audit log - **Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Requirement: https://github.com/harvester/harvester/issues/578
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
Collect (kubernetes) `audit` log by default.
<del>Send `audit` log to `loki` by default, which can be queried by embedded `grafana`.</del>
Support send `audit` logs to general log servers.
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
HEP PR:
https://github.com/harvester/harvester/pull/2684
HEP 578 event and audit log [CI SKIP] #2684
|
non_process
|
harvester supports kube audit log is your feature request related to a problem please describe requirement describe the solution you d like collect kubernetes audit log by default send audit log to loki by default which can be queried by embedded grafana support send audit logs to general log servers describe alternatives you ve considered additional context hep pr hep event and audit log
| 0
|
4,918
| 7,794,379,694
|
IssuesEvent
|
2018-06-08 02:16:30
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
Use the Morrison microphysics in the CLUBB SCM lines that appear on the SAM_CLUBB nightly plots (Trac #32)
|
Migrated from Trac post_processing senkbeil@uwm.edu task
|
Currently, in the SAM_CLUBB nightly plots, Morrison microphysics is used for the SAM_CLUBB lines, but COAMPS or KK microphysics is used for the CLUBB SCM lines that appear in the SAM_CLUBB nightly plots.
When the Morrison microphysics works in CLUBB, then perhaps we could use Morrison microphysics in all the SAM_CLUBB nightly plots (including CLUBB SCM lines) except one which will use the SAM 1MOM microphysics, but use a mixture of COAMPS, KK, and possibly Morrison in the CLUBB nightly plots. This will allow us to test Morrison in CLUBB standalone, because CLUBB standalone lines appear on the SAM_CLUBB nightly plots.
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/32
```json
{
"status": "closed",
"changetime": "2009-05-29T20:38:21",
"description": "Currently, in the SAM_CLUBB nightly plots, Morrison microphysics is used for the SAM_CLUBB lines, but COAMPS or KK microphysics is used for the CLUBB SCM lines that appear in the SAM_CLUBB nightly plots.\n\nWhen the Morrison microphysics works in CLUBB, then perhaps we could use Morrison microphysics in all the SAM_CLUBB nightly plots (including CLUBB SCM lines) except one which will use the SAM 1MOM microphysics, but use a mixture of COAMPS, KK, and possibly Morrison in the CLUBB nightly plots. This will allow us to test Morrison in CLUBB standalone, because CLUBB standalone lines appear on the SAM_CLUBB nightly plots. ",
"reporter": "vlarson@uwm.edu",
"cc": "dschanen@uwm.edu",
"resolution": "Verified by V. Larson",
"_ts": "1243629501000000",
"component": "post_processing",
"summary": "Use the Morrison microphysics in the CLUBB SCM lines that appear on the SAM_CLUBB nightly plots",
"priority": "minor",
"keywords": "SAM_CLUBB, nightly plots, Morrison microphysics",
"time": "2009-05-15T18:42:05",
"milestone": "",
"owner": "senkbeil@uwm.edu",
"type": "task"
}
```
|
1.0
|
Use the Morrison microphysics in the CLUBB SCM lines that appear on the SAM_CLUBB nightly plots (Trac #32) - Currently, in the SAM_CLUBB nightly plots, Morrison microphysics is used for the SAM_CLUBB lines, but COAMPS or KK microphysics is used for the CLUBB SCM lines that appear in the SAM_CLUBB nightly plots.
When the Morrison microphysics works in CLUBB, then perhaps we could use Morrison microphysics in all the SAM_CLUBB nightly plots (including CLUBB SCM lines) except one which will use the SAM 1MOM microphysics, but use a mixture of COAMPS, KK, and possibly Morrison in the CLUBB nightly plots. This will allow us to test Morrison in CLUBB standalone, because CLUBB standalone lines appear on the SAM_CLUBB nightly plots.
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/32
```json
{
"status": "closed",
"changetime": "2009-05-29T20:38:21",
"description": "Currently, in the SAM_CLUBB nightly plots, Morrison microphysics is used for the SAM_CLUBB lines, but COAMPS or KK microphysics is used for the CLUBB SCM lines that appear in the SAM_CLUBB nightly plots.\n\nWhen the Morrison microphysics works in CLUBB, then perhaps we could use Morrison microphysics in all the SAM_CLUBB nightly plots (including CLUBB SCM lines) except one which will use the SAM 1MOM microphysics, but use a mixture of COAMPS, KK, and possibly Morrison in the CLUBB nightly plots. This will allow us to test Morrison in CLUBB standalone, because CLUBB standalone lines appear on the SAM_CLUBB nightly plots. ",
"reporter": "vlarson@uwm.edu",
"cc": "dschanen@uwm.edu",
"resolution": "Verified by V. Larson",
"_ts": "1243629501000000",
"component": "post_processing",
"summary": "Use the Morrison microphysics in the CLUBB SCM lines that appear on the SAM_CLUBB nightly plots",
"priority": "minor",
"keywords": "SAM_CLUBB, nightly plots, Morrison microphysics",
"time": "2009-05-15T18:42:05",
"milestone": "",
"owner": "senkbeil@uwm.edu",
"type": "task"
}
```
|
process
|
use the morrison microphysics in the clubb scm lines that appear on the sam clubb nightly plots trac currently in the sam clubb nightly plots morrison microphysics is used for the sam clubb lines but coamps or kk microphysics is used for the clubb scm lines that appear in the sam clubb nightly plots when the morrison microphysics works in clubb then perhaps we could use morrison microphysics in all the sam clubb nightly plots including clubb scm lines except one which will use the sam microphysics but use a mixture of coamps kk and possibly morrison in the clubb nightly plots this will allow us to test morrison in clubb standalone because clubb standalone lines appear on the sam clubb nightly plots attachments migrated from json status closed changetime description currently in the sam clubb nightly plots morrison microphysics is used for the sam clubb lines but coamps or kk microphysics is used for the clubb scm lines that appear in the sam clubb nightly plots n nwhen the morrison microphysics works in clubb then perhaps we could use morrison microphysics in all the sam clubb nightly plots including clubb scm lines except one which will use the sam microphysics but use a mixture of coamps kk and possibly morrison in the clubb nightly plots this will allow us to test morrison in clubb standalone because clubb standalone lines appear on the sam clubb nightly plots reporter vlarson uwm edu cc dschanen uwm edu resolution verified by v larson ts component post processing summary use the morrison microphysics in the clubb scm lines that appear on the sam clubb nightly plots priority minor keywords sam clubb nightly plots morrison microphysics time milestone owner senkbeil uwm edu type task
| 1
|
41,919
| 9,100,500,632
|
IssuesEvent
|
2019-02-20 08:44:00
|
Altinn/altinn-studio
|
https://api.github.com/repos/Altinn/altinn-studio
|
closed
|
Simplified structure on data in redux store
|
code-quality
|
Simplify the structure of the Redux state where possible.
|
1.0
|
Simplified structure on data in redux store - Simplify the structure of the Redux state where possible.
|
non_process
|
simplified structure on data in redux store simplify the structure of the redux state where possible
| 0
|
91,286
| 10,712,275,842
|
IssuesEvent
|
2019-10-25 08:38:20
|
amu-oss/whattodo
|
https://api.github.com/repos/amu-oss/whattodo
|
opened
|
Mention how to add RSS feeds on different channels
|
Hacktoberfest documentation good first issue help wanted
|
We need to document the behavior of RSS App somewhere on whattodo.readthedocs.io:
Something along the lines of:
[RSS feeds](https://en.wikipedia.org/wiki/RSS) are hugely helpful for following a blog or a podcast. AMU-OSS's slack channel has the RSS app installed which will post a new message on slack whenever there's an update on the feed.
The following commands can be used to manage the RSS feeds linked in any slack channel:
```
/feed subscribe <link_to_RSS_feeds>
/feed list
/feed help
```
These commands can be typed into any slack channel.
|
1.0
|
Mention how to add RSS feeds on different channels - We need to document the behavior of RSS App somewhere on whattodo.readthedocs.io:
Something along the lines of:
[RSS feeds](https://en.wikipedia.org/wiki/RSS) are hugely helpful for following a blog or a podcast. AMU-OSS's slack channel has the RSS app installed which will post a new message on slack whenever there's an update on the feed.
The following commands can be used to manage the RSS feeds linked in any slack channel:
```
/feed subscribe <link_to_RSS_feeds>
/feed list
/feed help
```
These commands can be typed into any slack channel.
|
non_process
|
mention how to add rss feeds on different channels we need to document the behavior of rss app somewhere on whattodo readthedocs io something along the lines of are hugely helpful for following a blog or a podcast amu oss s slack channel has the rss app installed which will post a new message on slack whenever there s an update on the feed the following commands can be used to manage the rss feeds linked in any slack channel feed subscribe feed list feed help these commands can be typed into any slack channel
| 0
|
210,948
| 7,196,950,660
|
IssuesEvent
|
2018-02-05 06:44:56
|
opendatakit/briefcase
|
https://api.github.com/repos/opendatakit/briefcase
|
closed
|
Use `pull_aggregate` instead of `pull-aggregate`
|
high priority
|
#### Software versions
Briefcase v1.9.0 beta
#### Problem description
On the command line interface, we have a new command `pull-aggregate` which uses hyphens. It should use underscores because that's what the other commands use.
#### Steps to reproduce the problem
Build the jar with `./gradlew clean jar` and run it with `java -jar build/libs/ODK\ Briefcase\ v1.9.0-beta.0-18-g277909e.jar -h`.
See the command line help.
#### Expected behavior
I don't love underscores, but in this case, we should be consistent with the existing usage.
|
1.0
|
Use `pull_aggregate` instead of `pull-aggregate` - #### Software versions
Briefcase v1.9.0 beta
#### Problem description
On the command line interface, we have a new command `pull-aggregate` which uses hyphens. It should use underscores because that's what the other commands use.
#### Steps to reproduce the problem
Build the jar with `./gradlew clean jar` and run it with `java -jar build/libs/ODK\ Briefcase\ v1.9.0-beta.0-18-g277909e.jar -h`.
See the command line help.
#### Expected behavior
I don't love underscores, but in this case, we should be consistent with the existing usage.
|
non_process
|
use pull aggregate instead of pull aggregate software versions briefcase beta problem description on the command line interface we have a new command pull aggregate which uses hyphens it should use underscores because that s what the other commands use steps to reproduce the problem build the jar with gradlew clean jar and run it with java jar build libs odk briefcase beta jar h see the command line help expected behavior i don t love underscores but in this case we should be consistent with the existing usage
| 0
|
63,765
| 26,511,920,823
|
IssuesEvent
|
2023-01-18 17:45:57
|
BCDevOps/developer-experience
|
https://api.github.com/repos/BCDevOps/developer-experience
|
closed
|
Make a video on how to apply golden signal to application on the platform with sysdig.
|
ops and shared services
|
**Describe the issue**
Currently, the use of sysdig dashboard for platform application monitoring is limited, I want have Dustin Krysak take a look of the current monitoring standards and give us advice of improvement.
**Additional context**
- [x] initial conversation to introduce what we currently have
- [x] worked dashboard and other monitoring standards based on suggestion
- [ ] get docs based on this experience
- [ ] (maybe) make a course out of it
**How does this benefit the users of our platform?**
**Definition of done**
Identify what will need to happen/be delivered for this to be completely done
|
1.0
|
Make a video on how to apply golden signal to application on the platform with sysdig. - **Describe the issue**
Currently, the use of sysdig dashboard for platform application monitoring is limited, I want have Dustin Krysak take a look of the current monitoring standards and give us advice of improvement.
**Additional context**
- [x] initial conversation to introduce what we currently have
- [x] worked dashboard and other monitoring standards based on suggestion
- [ ] get docs based on this experience
- [ ] (maybe) make a course out of it
**How does this benefit the users of our platform?**
**Definition of done**
Identify what will need to happen/be delivered for this to be completely done
|
non_process
|
make a video on how to apply golden signal to application on the platform with sysdig describe the issue currently the use of sysdig dashboard for platform application monitoring is limited i want have dustin krysak take a look of the current monitoring standards and give us advice of improvement additional context initial conversation to introduce what we currently have worked dashboard and other monitoring standards based on suggestion get docs based on this experience maybe make a course out of it how does this benefit the users of our platform definition of done identify what will need to happen be delivered for this to be completely done
| 0
|
129,744
| 17,868,872,247
|
IssuesEvent
|
2021-09-06 13:00:32
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
can't direct connect when http_proxy/https_proxy environment value is set
|
*as-designed
|
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.58.2 (9adfb3)download from vscode website
- OS Version: opensuse tumbleweed (kde) Linux x64 5.13.4.1-default
Steps to Reproduce:
1. vscode can't directly connect to extensions marketplaces when http_ptoxy or https_proxy is set. no settings could cancel it unless you unset the http_proxy https_proxy, it's inconvenience.
2.
|
1.0
|
can't direct connect when http_proxy/https_proxy environment value is set - <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.58.2 (9adfb3)download from vscode website
- OS Version: opensuse tumbleweed (kde) Linux x64 5.13.4.1-default
Steps to Reproduce:
1. vscode can't directly connect to extensions marketplaces when http_ptoxy or https_proxy is set. no settings could cancel it unless you unset the http_proxy https_proxy, it's inconvenience.
2.
|
non_process
|
can t direct connect when http proxy https proxy environment value is set does this issue occur when all extensions are disabled yes report issue dialog can assist with this vs code version download from vscode website os version opensuse tumbleweed kde linux default steps to reproduce vscode can t directly connect to extensions marketplaces when http ptoxy or https proxy is set no settings could cancel it unless you unset the http proxy https proxy it s inconvenience
| 0
|
34,742
| 6,370,108,154
|
IssuesEvent
|
2017-08-01 13:31:46
|
gimli-org/gimli
|
https://api.github.com/repos/gimli-org/gimli
|
opened
|
Better integratation of C++ API Documentation
|
documentation
|
- [ ] When Python methods are monkeypatched to C++ classes, they should be visible in the documentation.
- [ ] C++ classes and class members should also be integrated in the sphinx documentation (potentially making doxygen obsolete).
|
1.0
|
Better integratation of C++ API Documentation - - [ ] When Python methods are monkeypatched to C++ classes, they should be visible in the documentation.
- [ ] C++ classes and class members should also be integrated in the sphinx documentation (potentially making doxygen obsolete).
|
non_process
|
better integratation of c api documentation when python methods are monkeypatched to c classes they should be visible in the documentation c classes and class members should also be integrated in the sphinx documentation potentially making doxygen obsolete
| 0
|
2,693
| 5,540,428,324
|
IssuesEvent
|
2017-03-22 10:05:17
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
closed
|
Incorrect Error message in 118n file
|
bug comp: activiti-processList i18n
|
**Type of issue:** (check with "[x]")
```
- [ ] New feature request
- [ x ] Bug
- [ ] Support request
- [ ] Documentation
```
**Current behavior:**
When there is an error starting a process I get the error
"Could start new process instance, please check you have permission."
**Expected behavior:**
Please change it to "Could not"
**Component name and version:**
https://github.com/Alfresco/alfresco-ng2-components/blob/master/ng2-components/ng2-activiti-processlist/src/i18n/en.json
**Browser and version:**
all
|
1.0
|
Incorrect Error message in 118n file - **Type of issue:** (check with "[x]")
```
- [ ] New feature request
- [ x ] Bug
- [ ] Support request
- [ ] Documentation
```
**Current behavior:**
When there is an error starting a process I get the error
"Could start new process instance, please check you have permission."
**Expected behavior:**
Please change it to "Could not"
**Component name and version:**
https://github.com/Alfresco/alfresco-ng2-components/blob/master/ng2-components/ng2-activiti-processlist/src/i18n/en.json
**Browser and version:**
all
|
process
|
incorrect error message in file type of issue check with new feature request bug support request documentation current behavior when there is an error starting a process i get the error could start new process instance please check you have permission expected behavior please change it to could not component name and version browser and version all
| 1
|
308,533
| 9,440,304,859
|
IssuesEvent
|
2019-04-14 16:54:53
|
cs2113-ay1819s2-m11-3/main
|
https://api.github.com/repos/cs2113-ay1819s2-m11-3/main
|
closed
|
As a user, I want the system to remove unnecessary flash cards automatically.
|
priority.High type.Enhancement
|
- The system should detect the deadline assigned to the flash cards and remove it automatically.
|
1.0
|
As a user, I want the system to remove unnecessary flash cards automatically. - - The system should detect the deadline assigned to the flash cards and remove it automatically.
|
non_process
|
as a user i want the system to remove unnecessary flash cards automatically the system should detect the deadline assigned to the flash cards and remove it automatically
| 0
|
153,667
| 12,156,929,629
|
IssuesEvent
|
2020-04-25 19:29:51
|
sky-music/sky-python-music-sheet-maker
|
https://api.github.com/repos/sky-music/sky-python-music-sheet-maker
|
closed
|
Allow unicode sharps and flats as synonyms
|
needs-testing
|
Possible approaches:
- add a method in `python/noteparsers/noteparser.py` to replace unicode sharps and flats with (# b)
- replace Unicode sharps and flats in the CHROMATIC SCALE DICT and regexes of each note parser (our sharps `#` and flats `b` are currently hard-coded in there...)
|
1.0
|
Allow unicode sharps and flats as synonyms - Possible approaches:
- add a method in `python/noteparsers/noteparser.py` to replace unicode sharps and flats with (# b)
- replace Unicode sharps and flats in the CHROMATIC SCALE DICT and regexes of each note parser (our sharps `#` and flats `b` are currently hard-coded in there...)
|
non_process
|
allow unicode sharps and flats as synonyms possible approaches add a method in python noteparsers noteparser py to replace unicode sharps and flats with b replace unicode sharps and flats in the chromatic scale dict and regexes of each note parser our sharps and flats b are currently hard coded in there
| 0
|
19,786
| 26,166,501,401
|
IssuesEvent
|
2023-01-01 10:56:43
|
bisq-network/proposals
|
https://api.github.com/repos/bisq-network/proposals
|
reopened
|
Reduce delayed payout timelocks to 5 and 12 days
|
was:stalled a:proposal re:processes
|
> _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://docs.bisq.network/proposals.html)._
<!-- Please do not remove the text above. -->
I propose to reduce the timelock to 5 days for altcoins (for which higher trading window is one day) and 12 days for fiat (which higher trading windows are 8 days for USPMO and 6 days for SEPA)
Currently, when mediation is not working because of unresponsive peers, a bug or any other reason, traders have to wait for 10 or 20 days (alts or fiat) to start arbitration.
I don't think that waiting this long improves security but it creates inconveniences to honest traders.
Opening arbitration just after the timelock is triggered is not mandatory and arbitration should come with a [cost](https://github.com/bisq-network/proposals/issues/292) except for bugs, so traders must not abuse it. They can ask for a new mediation suggestion or generate their own payout (not an easy option) as long as they haven't started arbitration.
Even for long trading window payment methods, traders don't usually open mediation tickets at 8 or 6 days since start of the trade but much earlier, as soon as they see something strange. Thus, there would be time enough for mediators to suggest a payout. I thought that since there's the trading chat, those 12 days could be even reduced to something like 8 or 10 days, but I wanted to be conservative because mediation for fiat is more difficult.
Personally, I haven't used USPMO. It looks like a slow payment method and exceeding 12 days for a trade could happen sometimes, but I've never had a valid reason to wait for more than 4 days for a SEPA transfer.
|
1.0
|
Reduce delayed payout timelocks to 5 and 12 days - > _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://docs.bisq.network/proposals.html)._
<!-- Please do not remove the text above. -->
I propose to reduce the timelock to 5 days for altcoins (for which higher trading window is one day) and 12 days for fiat (which higher trading windows are 8 days for USPMO and 6 days for SEPA)
Currently, when mediation is not working because of unresponsive peers, a bug or any other reason, traders have to wait for 10 or 20 days (alts or fiat) to start arbitration.
I don't think that waiting this long improves security but it creates inconveniences to honest traders.
Opening arbitration just after the timelock is triggered is not mandatory and arbitration should come with a [cost](https://github.com/bisq-network/proposals/issues/292) except for bugs, so traders must not abuse it. They can ask for a new mediation suggestion or generate their own payout (not an easy option) as long as they haven't started arbitration.
Even for long trading window payment methods, traders don't usually open mediation tickets at 8 or 6 days since start of the trade but much earlier, as soon as they see something strange. Thus, there would be time enough for mediators to suggest a payout. I thought that since there's the trading chat, those 12 days could be even reduced to something like 8 or 10 days, but I wanted to be conservative because mediation for fiat is more difficult.
Personally, I haven't used USPMO. It looks like a slow payment method and exceeding 12 days for a trade could happen sometimes, but I've never had a valid reason to wait for more than 4 days for a SEPA transfer.
|
process
|
reduce delayed payout timelocks to and days this is a bisq network proposal please familiarize yourself with the i propose to reduce the timelock to days for altcoins for which higher trading window is one day and days for fiat which higher trading windows are days for uspmo and days for sepa currently when mediation is not working because of unresponsive peers a bug or any other reason traders have to wait for or days alts or fiat to start arbitration i don t think that waiting this long improves security but it creates inconveniences to honest traders opening arbitration just after the timelock is triggered is not mandatory and arbitration should come with a except for bugs so traders must not abuse it they can ask for a new mediation suggestion or generate their own payout not an easy option as long as they haven t started arbitration even for long trading window payment methods traders don t usually open mediation tickets at or days since start of the trade but much earlier as soon as they see something strange thus there would be time enough for mediators to suggest a payout i thought that since there s the trading chat those days could be even reduced to something like or days but i wanted to be conservative because mediation for fiat is more difficult personally i haven t used uspmo it looks like a slow payment method and exceeding days for a trade could happen sometimes but i ve never had a valid reason to wait for more than days for a sepa transfer
| 1
|
16,976
| 22,338,178,815
|
IssuesEvent
|
2022-06-14 20:46:06
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[processor/transform] Refactor function structure
|
priority:p2 proc: transformprocessor
|
[The transform processor currently uses reflection to call functions.](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/e2d24b5a36ce989b58effaf8ea89d6474f6795a0/processor/transformprocessor/internal/common/functions.go#L143) Reflection is working, but is complicated to maintain and adapt when new functions are added.
This issue will track the following steps to removing reflection and improving function add process:
- [x] Update reflection in `NewFunctionCall`. Any new functions should be able to access any parameters from `Value`
- [x] Refactor functions into their own files/modules. This is how [log-collection](https://github.com/open-telemetry/opentelemetry-log-collection/tree/main/operator) is organized, and I think it helps organize the code. If we continue to put all shared functions in `common/functions.go`, the file will become huge.
- [x] Think about how to refactor unit tests for functions. Right now each time a function is added it will need test scenarios in `common /functions_test.go` and each signal's `functions_test.go`. This process has a lot of room for missed tests. Want to figure out if there is a better way to handle testing functions for all the different scenarios.
- [ ] Add a CONTRIBUTING.md for how to add new functions and how to add new features to the parser.
|
1.0
|
[processor/transform] Refactor function structure - [The transform processor currently uses reflection to call functions.](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/e2d24b5a36ce989b58effaf8ea89d6474f6795a0/processor/transformprocessor/internal/common/functions.go#L143) Reflection is working, but is complicated to maintain and adapt when new functions are added.
This issue will track the following steps to removing reflection and improving function add process:
- [x] Update reflection in `NewFunctionCall`. Any new functions should be able to access any parameters from `Value`
- [x] Refactor functions into their own files/modules. This is how [log-collection](https://github.com/open-telemetry/opentelemetry-log-collection/tree/main/operator) is organized, and I think it helps organize the code. If we continue to put all shared functions in `common/functions.go`, the file will become huge.
- [x] Think about how to refactor unit tests for functions. Right now each time a function is added it will need test scenarios in `common /functions_test.go` and each signal's `functions_test.go`. This process has a lot of room for missed tests. Want to figure out if there is a better way to handle testing functions for all the different scenarios.
- [ ] Add a CONTRIBUTING.md for how to add new functions and how to add new features to the parser.
|
process
|
refactor function structure reflection is working but is complicated to maintain and adapt when new functions are added this issue will track the following steps to removing reflection and improving function add process update reflection in newfunctioncall any new functions should be able to access any parameters from value refactor functions into their own files modules this is how is organized and i think it helps organize the code if we continue to put all shared functions in common functions go the file will become huge think about how to refactor unit tests for functions right now each time a function is added it will need test scenarios in common functions test go and each signal s functions test go this process has a lot of room for missed tests want to figure out if there is a better way to handle testing functions for all the different scenarios add a contributing md for how to add new functions and how to add new features to the parser
| 1
|
336,559
| 24,504,203,195
|
IssuesEvent
|
2022-10-10 15:01:28
|
quarkusio/quarkus
|
https://api.github.com/repos/quarkusio/quarkus
|
opened
|
Docs: Bring summaries from Quarkus website guide_* file into docs
|
area/documentation area/housekeeping
|
### Description
Goal: Generate all docs attributes from content in the quarkus dir.
To facilitate this, we need the manually maintained summaries from the [guide data for the main branch of the website](https://github.com/quarkusio/quarkusio.github.io/blob/develop/_data/guides-main.yaml) copied into the associated *.adoc file as a `summary` attribute.
### Implementation ideas
_No response_
|
1.0
|
Docs: Bring summaries from Quarkus website guide_* file into docs - ### Description
Goal: Generate all docs attributes from content in the quarkus dir.
To facilitate this, we need the manually maintained summaries from the [guide data for the main branch of the website](https://github.com/quarkusio/quarkusio.github.io/blob/develop/_data/guides-main.yaml) copied into the associated *.adoc file as a `summary` attribute.
### Implementation ideas
_No response_
|
non_process
|
docs bring summaries from quarkus website guide file into docs description goal generate all docs attributes from content in the quarkus dir to facilitate this we need the manually maintained summaries from the copied into the associated adoc file as a summary attribute implementation ideas no response
| 0
|
3,428
| 6,529,574,501
|
IssuesEvent
|
2017-08-30 12:14:44
|
RadeonOpenCompute/ROCm-OpenCL-Driver
|
https://api.github.com/repos/RadeonOpenCompute/ROCm-OpenCL-Driver
|
closed
|
[in-process] Setting options to its default values.
|
difficulty:C_Hard feature [in-process]
|
LLVM/Clang options being set once stay the same from one compilation to another (if not being reset manually). In general it might lead and actually leads to wrong compilation results and even failures and crashes. The needed functionality is to set all the options to its default values on every Compilation call.
The best place is seen in Clang's CommandLine.h, where such functionallity is missing. For instance, virtual method llvm::cl::Option::setDefault() is worth implementing in order to be able to set all the options to its default values in user's code without actually knowing all these options:
for (auto SC : llvm::cl::getRegisteredSubcommands()) {
for (auto &OM : SC->OptionsMap) {
llvm::cl::Option *O = OM.second;
// missing functionality in Clang's CommandLine.h
O->setDefault();
}
}
|
1.0
|
[in-process] Setting options to its default values. - LLVM/Clang options being set once stay the same from one compilation to another (if not being reset manually). In general it might lead and actually leads to wrong compilation results and even failures and crashes. The needed functionality is to set all the options to its default values on every Compilation call.
The best place is seen in Clang's CommandLine.h, where such functionallity is missing. For instance, virtual method llvm::cl::Option::setDefault() is worth implementing in order to be able to set all the options to its default values in user's code without actually knowing all these options:
for (auto SC : llvm::cl::getRegisteredSubcommands()) {
for (auto &OM : SC->OptionsMap) {
llvm::cl::Option *O = OM.second;
// missing functionality in Clang's CommandLine.h
O->setDefault();
}
}
|
process
|
setting options to its default values llvm clang options being set once stay the same from one compilation to another if not being reset manually in general it might lead and actually leads to wrong compilation results and even failures and crashes the needed functionality is to set all the options to its default values on every compilation call the best place is seen in clang s commandline h where such functionallity is missing for instance virtual method llvm cl option setdefault is worth implementing in order to be able to set all the options to its default values in user s code without actually knowing all these options for auto sc llvm cl getregisteredsubcommands for auto om sc optionsmap llvm cl option o om second missing functionality in clang s commandline h o setdefault
| 1
|
5,147
| 7,927,687,224
|
IssuesEvent
|
2018-07-06 08:56:02
|
symfony/symfony
|
https://api.github.com/repos/symfony/symfony
|
closed
|
Process package on windows
|
Bug Process Status: Waiting feedback
|
Hi there. I'm using symfony/process 4.1.0 on Windows 10, PHP 7.2 and trying to start a process in background but seems that does`t work.
I have to run this command: ```start /B php artisan queue:async 15 async > NUL```
I see that Process.php class get me this command: ```cmd /V:ON /E:ON /D /C (start /B php artisan queue:async 15 async > NUL) 1>"C:\Apache24\tmp\sf_proc_00.out" 2>"C:\Apache24\tmp\sf_proc_00.err"```
If i manualy run this command, will not work because as i see, the command should be market with extra double quotes before and after parenthesis like: ```cmd /V:ON /E:ON /D /C "(start /B php artisan queue:async 15 async > NUL)" 1>"C:\Apache24\tmp\sf_proc_00.out" 2>"C:\Apache24\tmp\sf_proc_00.err"```
Running new command from cli will execute that process... but not directly from web server...
The problem is that even though I have modified the Process.php class, it still does not launch the process.
Also with no luck :) i tryed with vanilla php: ```pclose(popen("start /B php artisan queue:async 15 async", "r"))```
After a wile i get: ```client.ERROR: The process "start /B php artisan queue:async 15 async > NUL" exceeded the timeout of 60 seconds. ... \\vendor\\symfony\\process\\Process.php:1158```
What else can do?
Thanks!
|
1.0
|
Process package on windows - Hi there. I'm using symfony/process 4.1.0 on Windows 10 with PHP 7.2 and trying to start a process in the background, but it seems that it doesn't work.
I have to run this command: ```start /B php artisan queue:async 15 async > NUL```
I see that the Process.php class gives me this command: ```cmd /V:ON /E:ON /D /C (start /B php artisan queue:async 15 async > NUL) 1>"C:\Apache24\tmp\sf_proc_00.out" 2>"C:\Apache24\tmp\sf_proc_00.err"```
If I manually run this command, it will not work because, as I see it, the command should be marked with extra double quotes before and after the parentheses, like: ```cmd /V:ON /E:ON /D /C "(start /B php artisan queue:async 15 async > NUL)" 1>"C:\Apache24\tmp\sf_proc_00.out" 2>"C:\Apache24\tmp\sf_proc_00.err"```
Running the new command from the CLI will execute that process... but not directly from the web server...
The problem is that even though I have modified the Process.php class, it still does not launch the process.
Also, with no luck :), I tried with vanilla PHP: ```pclose(popen("start /B php artisan queue:async 15 async", "r"))```
After a while I get: ```client.ERROR: The process "start /B php artisan queue:async 15 async > NUL" exceeded the timeout of 60 seconds. ... \\vendor\\symfony\\process\\Process.php:1158```
What else can I do?
Thanks!
|
process
|
process package on windows hi there i m using symfony process on windows php and trying to start a process in background but seems that doesn t work i have to run this command start b php artisan queue async async nul i see that process php class get me this command cmd v on e on d c start b php artisan queue async async nul c tmp sf proc out c tmp sf proc err if i manually run this command will not work because as i see the command should be marked with extra double quotes before and after parenthesis like cmd v on e on d c start b php artisan queue async async nul c tmp sf proc out c tmp sf proc err running new command from cli will execute that process but not directly from web server the problem is that even though i have modified the process php class it still does not launch the process also with no luck i tried with vanilla php pclose popen start b php artisan queue async async r after a while i get client error the process start b php artisan queue async async nul exceeded the timeout of seconds vendor symfony process process php what else can i do thanks
| 1
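The quoting difference described in the symfony/process record above can be sketched in a few lines. The helper below is hypothetical (it is not part of symfony/process); it builds the `cmd /V:ON /E:ON /D /C` wrapper with and without the extra double quotes around the parenthesized command, which is exactly the difference the reporter points out:

```python
def build_windows_cmd(command, stdout_file, stderr_file, quote_group=True):
    """Build a cmd.exe wrapper line for a background command.

    With quote_group=True the parenthesized command is wrapped in
    double quotes, which the issue reporter says is required for the
    command to run correctly when launched from the web server.
    """
    group = f"({command})"
    if quote_group:
        group = f'"{group}"'
    return f'cmd /V:ON /E:ON /D /C {group} 1>"{stdout_file}" 2>"{stderr_file}"'


# Command as symfony/process built it (no quotes around the group).
broken = build_windows_cmd(
    "start /B php artisan queue:async 15 async > NUL",
    r"C:\Apache24\tmp\sf_proc_00.out",
    r"C:\Apache24\tmp\sf_proc_00.err",
    quote_group=False,
)
# Command as the reporter says it should be (quoted group).
fixed = build_windows_cmd(
    "start /B php artisan queue:async 15 async > NUL",
    r"C:\Apache24\tmp\sf_proc_00.out",
    r"C:\Apache24\tmp\sf_proc_00.err",
)
print(broken)
print(fixed)
```

This only reproduces the string-building step, not the actual process launch; whether the quoted form fixes the launch on a given Windows setup is the open question in the record.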
|
91,213
| 26,321,201,906
|
IssuesEvent
|
2023-01-10 00:01:10
|
KhronosGroup/Vulkan-Headers
|
https://api.github.com/repos/KhronosGroup/Vulkan-Headers
|
closed
|
Setting target_include_directories with generator expression breaks configuration of layers
|
build
|
**Describe the bug**
`Vulkan-Headers` and `Vulkan-ValidationLayers` are included in my project as submodules via `add_subdirectory`.
During CMake configuration step of `Vulkan-ValidationLayers` I get an error at [line](https://github.com/KhronosGroup/Vulkan-ValidationLayers/blob/4050be7c1960ae0aec00b421bb47be8d67cabe63/layers/CMakeLists.txt#L25) where `VulkanRegistry_DIR` get `$<BUILD_INTERFACE:/home/anatoliy/Projects/sah_kd_tree/external/Vulkan-Headers/registry>`. As you can see there is unevaluated generator expression, that comes directly from the [line](https://github.com/KhronosGroup/Vulkan-Headers/blob/24115c70bea4bb91e7bd12ef093fbdf6aa9a5222/CMakeLists.txt#L56). It is expected, because *generator expression* evaluated at step when *generator works*, not at *configuration step*. Therefore `EXISTS` failed.
**Expected behavior**
Project configured.
|
1.0
|
Setting target_include_directories with generator expression breaks configuration of layers - **Describe the bug**
`Vulkan-Headers` and `Vulkan-ValidationLayers` are included in my project as submodules via `add_subdirectory`.
During CMake configuration step of `Vulkan-ValidationLayers` I get an error at [line](https://github.com/KhronosGroup/Vulkan-ValidationLayers/blob/4050be7c1960ae0aec00b421bb47be8d67cabe63/layers/CMakeLists.txt#L25) where `VulkanRegistry_DIR` get `$<BUILD_INTERFACE:/home/anatoliy/Projects/sah_kd_tree/external/Vulkan-Headers/registry>`. As you can see there is unevaluated generator expression, that comes directly from the [line](https://github.com/KhronosGroup/Vulkan-Headers/blob/24115c70bea4bb91e7bd12ef093fbdf6aa9a5222/CMakeLists.txt#L56). It is expected, because *generator expression* evaluated at step when *generator works*, not at *configuration step*. Therefore `EXISTS` failed.
**Expected behavior**
Project configured.
|
non_process
|
setting target include directories with generator expression breaks configuration of layers describe the bug vulkan headers and vulkan validationlayers are included in my project as submodules via add subdirectory during cmake configuration step of vulkan validationlayers i get an error at where vulkanregistry dir get as you can see there is unevaluated generator expression that comes directly from the it is expected because generator expression evaluated at step when generator works not at configuration step therefore exists failed expected behavior project configured
| 0
|
12,167
| 14,741,584,034
|
IssuesEvent
|
2021-01-07 10:50:54
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Merge Account E0423 with E1193 and any associated charges
|
anc-process anp-1 ant-support
|
In GitLab by @kdjstudios on Jan 28, 2019, 12:47
**Submitted by:** "Kimberly Gagner" <kimberly.gagner@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-01-28-91396/conversation
**Server:** Internal
**Client/Site:** Billerica
**Account:** Multiple
**Issue:**
Please merge account E0423 to E1193 as they are the same owner and will now be sharing the usage.
|
1.0
|
Merge Account E0423 with E1193 and any associated charges - In GitLab by @kdjstudios on Jan 28, 2019, 12:47
**Submitted by:** "Kimberly Gagner" <kimberly.gagner@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-01-28-91396/conversation
**Server:** Internal
**Client/Site:** Billerica
**Account:** Multiple
**Issue:**
Please merge account E0423 to E1193 as they are the same owner and will now be sharing the usage.
|
process
|
merge account with and any associated charges in gitlab by kdjstudios on jan submitted by kimberly gagner helpdesk server internal client site billerica account multiple issue please merge account to as they are the same owner and will now be sharing the usage
| 1
|
15,121
| 18,852,431,668
|
IssuesEvent
|
2021-11-11 23:01:22
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
Warning: a recent release failed
|
type: process
|
The following release PRs may have failed:
* #16035
* #16036
* #16037
* #16038
* #16042
* #16043
* #16044
* #16045
|
1.0
|
Warning: a recent release failed - The following release PRs may have failed:
* #16035
* #16036
* #16037
* #16038
* #16042
* #16043
* #16044
* #16045
|
process
|
warning a recent release failed the following release prs may have failed
| 1
|
6,954
| 10,113,939,298
|
IssuesEvent
|
2019-07-30 17:57:39
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
closed
|
[ButtonBar] Evaluate how UIKit handles pure Swift class target/actions in extensions
|
[ButtonBar] type:Process
|
Example setup:
```swift
class SomeObject {
@objc func someEvent() {
}
}
let object = SomeObject()
self.navigationItem.rightBarButtonItem =
UIBarButtonItem(title: "Right", style: .done, target: object, action: #selector(SomeObject.someEvent))
```
Tap the button in an extension.
We need to determine what the expected behavior is.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/117176528](http://b/117176528)
|
1.0
|
[ButtonBar] Evaluate how UIKit handles pure Swift class target/actions in extensions - Example setup:
```swift
class SomeObject {
@objc func someEvent() {
}
}
let object = SomeObject()
self.navigationItem.rightBarButtonItem =
UIBarButtonItem(title: "Right", style: .done, target: object, action: #selector(SomeObject.someEvent))
```
Tap the button in an extension.
We need to determine what the expected behavior is.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/117176528](http://b/117176528)
|
process
|
evaluate how uikit handles pure swift class target actions in extensions example setup swift class someobject objc func someevent let object someobject self navigationitem rightbarbuttonitem uibarbuttonitem title right style done target object action selector someobject someevent tap the button in an extension we need to determine what the expected behavior is internal data associated internal bug
| 1
|
20,788
| 27,527,240,934
|
IssuesEvent
|
2023-03-06 19:02:10
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Release 6.1.0 - March 2023
|
P1 type: process release team-OSS not stale
|
# Status of Bazel 6.1.0
- Expected release date: 2023-03-06
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/46)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 6.1, simply send a PR against the `release-6.1.0` branch.
**Task list:**
- [x] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. -->
- [x] Send the release announcement for review
- [x] Push the release and notify package maintainers
- ~Update the documentation~
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Release 6.1.0 - March 2023 - # Status of Bazel 6.1.0
- Expected release date: 2023-03-06
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/46)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 6.1, simply send a PR against the `release-6.1.0` branch.
**Task list:**
- [x] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. -->
- [x] Send the release announcement for review
- [x] Push the release and notify package maintainers
- ~Update the documentation~
- [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
release march status of bazel expected release date to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list create send the release announcement for review push the release and notify package maintainers update the documentation update the
| 1
|
21,535
| 29,833,067,634
|
IssuesEvent
|
2023-06-18 14:04:18
|
parca-dev/parca-agent
|
https://api.github.com/repos/parca-dev/parca-agent
|
closed
|
Resource usage of rework process info
|
P0 optimization area/process-discovery
|
[metrics link](https://demo.pyrra.dev/prometheus/graph?g0.expr=process_resident_memory_bytes%7Bcontainer%3D%22parca-agent%22%7D&g0.tab=0&g0.stacked=0&g0.show_exemplars=0&g0.range_input=2d&g1.expr=process_open_fds%7Bcontainer%3D%22parca-agent%22%7D&g1.tab=0&g1.stacked=0&g1.show_exemplars=0&g1.range_input=2d&g2.expr=rate(process_cpu_seconds_total%7Bcontainer%3D%22parca-agent%22%7D%5B5m%5D)&g2.tab=0&g2.stacked=0&g2.show_exemplars=0&g2.range_input=12h)
**one pod uses significantly more CPU**
<img width="1217" alt="image" src="https://github.com/parca-dev/parca-agent/assets/959128/f1f9480d-0419-4171-97e4-b42be05297c2">
**one pod used 3x as much mem**
<img width="607" alt="image" src="https://github.com/parca-dev/parca-agent/assets/959128/215e17ed-7a6a-41d0-a74c-09e0ba6bfebf">
**FD don't seem to be closed quickly after processing the debug infos**
<img width="1239" alt="image" src="https://github.com/parca-dev/parca-agent/assets/959128/2574db3a-f484-475b-b95a-cbb9976d2e01">
|
1.0
|
Resource usage of rework process info - [metrics link](https://demo.pyrra.dev/prometheus/graph?g0.expr=process_resident_memory_bytes%7Bcontainer%3D%22parca-agent%22%7D&g0.tab=0&g0.stacked=0&g0.show_exemplars=0&g0.range_input=2d&g1.expr=process_open_fds%7Bcontainer%3D%22parca-agent%22%7D&g1.tab=0&g1.stacked=0&g1.show_exemplars=0&g1.range_input=2d&g2.expr=rate(process_cpu_seconds_total%7Bcontainer%3D%22parca-agent%22%7D%5B5m%5D)&g2.tab=0&g2.stacked=0&g2.show_exemplars=0&g2.range_input=12h)
**one pod uses significantly more CPU**
<img width="1217" alt="image" src="https://github.com/parca-dev/parca-agent/assets/959128/f1f9480d-0419-4171-97e4-b42be05297c2">
**one pod used 3x as much mem**
<img width="607" alt="image" src="https://github.com/parca-dev/parca-agent/assets/959128/215e17ed-7a6a-41d0-a74c-09e0ba6bfebf">
**FD don't seem to be closed quickly after processing the debug infos**
<img width="1239" alt="image" src="https://github.com/parca-dev/parca-agent/assets/959128/2574db3a-f484-475b-b95a-cbb9976d2e01">
|
process
|
resource usage of rework process info one pod uses significantly more cpu img width alt image src one pod used as much mem img width alt image src fd don t seem to be closed quickly after processing the debug infos img width alt image src
| 1
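The parca-agent record above links metrics graphed with PromQL expressions such as `rate(process_cpu_seconds_total[5m])`. As a rough illustration of what that expression computes, here is a simplified sketch in Python; real Prometheus `rate()` additionally handles counter resets and extrapolates to the window boundaries, which this toy version does not:

```python
def simple_rate(samples):
    """Per-second increase of a monotonically increasing counter.

    samples: list of (unix_timestamp, value) pairs inside the range
    window, oldest first. Returns 0.0 when there are too few samples.
    """
    if len(samples) < 2:
        return 0.0
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)


# CPU seconds consumed by an agent process, sampled every 60 s:
# the counter grows by 15 s of CPU time over a 300 s window,
# i.e. the process used about 0.05 cores on average.
cpu_samples = [(0, 100.0), (60, 103.0), (120, 106.0),
               (180, 109.0), (240, 112.0), (300, 115.0)]
print(simple_rate(cpu_samples))
```

This makes the "one pod uses significantly more CPU" observation concrete: a pod whose counter rises faster over the same window yields a larger rate.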
|
2,413
| 5,198,791,780
|
IssuesEvent
|
2017-01-23 19:06:46
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[subtitles] [FR] MÉLENCHON - LA CASTE ET SES MARIONNETTES
|
Language: French Process: Someone is working on this issue Process: [5] Review (2) in progress
|
# Video title
MÉLENCHON - LA CASTE ET SES MARIONNETTES
# URL
https://www.youtube.com/watch?v=3JG3dL4TsH0&feature=push-u-sub&attr_tag=PJqLaS43vZ8-6
# Youtube subtitles language
Français
# Duration
1:40
# Subtitles URL
https://www.youtube.com/timedtext_editor?v=3JG3dL4TsH0&tab=captions&ref=player&action_mde_edit_form=1&bl=vmp&ui=hd&lang=fr
|
2.0
|
[subtitles] [FR] MÉLENCHON - LA CASTE ET SES MARIONNETTES - # Video title
MÉLENCHON - LA CASTE ET SES MARIONNETTES
# URL
https://www.youtube.com/watch?v=3JG3dL4TsH0&feature=push-u-sub&attr_tag=PJqLaS43vZ8-6
# Youtube subtitles language
Français
# Duration
1:40
# Subtitles URL
https://www.youtube.com/timedtext_editor?v=3JG3dL4TsH0&tab=captions&ref=player&action_mde_edit_form=1&bl=vmp&ui=hd&lang=fr
|
process
|
mélenchon la caste et ses marionnettes video title mélenchon la caste et ses marionnettes url youtube subtitles language français duration subtitles url
| 1
|
196,345
| 6,927,076,465
|
IssuesEvent
|
2017-11-30 21:28:19
|
AZMAG/map-ATP
|
https://api.github.com/repos/AZMAG/map-ATP
|
closed
|
Add the icons for each of the types of symbols in the dropdown list
|
Priority: Low
|
Adding these will help the user more easily understand what type they would like to choose.
|
1.0
|
Add the icons for each of the types of symbols in the dropdown list - Adding these will help the user more easily understand what type they would like to choose.
|
non_process
|
add the icons for each of the types of symbols in the dropdown list adding these will help the user more easily understand what type they would like to choose
| 0
|
14,697
| 17,866,761,513
|
IssuesEvent
|
2021-09-06 10:22:27
|
corona-warn-app/cwa-wishlist
|
https://api.github.com/repos/corona-warn-app/cwa-wishlist
|
closed
|
"Warn others" function should be deactivated if registered RAT is positive, but registered PCR test is negative
|
enhancement mirrored-to-jira declined Test/Share process
|
## Current Implementation
If a user takes a RAT, registers it in the CWA & it returns positive, it's quite likely that they first go and take a PCR-test before warning others. If the PCR-test is also registered in the app and it then returns negative, the user still has the option to warn others by using the button of the RAT. This does not make sense, since RAT test results are often debunked by PCR-tests. This is also mentioned in the Corona-Warn-App: "Machen Sie einen PCR-Test um dieses Ergebnis zu verifizieren" ("Take a PCR test to verify this result").
## Suggested Enhancement
Deactivate the "Warn others" button if there is one negative PCR test in the app and one positive RAT in the app.
## Expected Benefits
Less false warnings.
## Screenshot
Here's a screenshot of a friend, which was in this situation just a few days ago:

----
Internal Tracking-ID: [EXPOSUREAPP-8993](https://jira-ibs.wbs.net.sap/browse/EXPOSUREAPP-8993)
|
1.0
|
"Warn others" function should be deactivated if registered RAT is positive, but registered PCR test is negative - ## Current Implementation
If a user takes a RAT, registers it in the CWA & it returns positive, it's quite likely that they first go and take a PCR-test before warning others. If the PCR-test is also registered in the app and it then returns negative, the user still has the option to warn others by using the button of the RAT. This does not make sense, since RAT test results are often debunked by PCR-tests. This is also mentioned in the Corona-Warn-App: "Machen Sie einen PCR-Test um dieses Ergebnis zu verifizieren" ("Take a PCR test to verify this result").
## Suggested Enhancement
Deactivate the "Warn others" button if there is one negative PCR test in the app and one positive RAT in the app.
## Expected Benefits
Less false warnings.
## Screenshot
Here's a screenshot of a friend, which was in this situation just a few days ago:

----
Internal Tracking-ID: [EXPOSUREAPP-8993](https://jira-ibs.wbs.net.sap/browse/EXPOSUREAPP-8993)
|
process
|
warn others function should be deactivated if registered rat is positive but registered pcr test is negative current implementation if a user takes a rat registers it in the cwa it returns positive it s quite likely that they first go and take a pcr test before warning others if the pcr test is also registered in the app and it then returns negative the user still has the option to warn others by using the button of the rat this does not make sense since rat test results are often debunked by pcr tests this is also mentioned in the corona warn app machen sie einen pcr test um dieses ergebnis zu verifizieren suggested enhancement deactivate the warn others button if there is one negative pcr test in the app and one positive rat in the app expected benefits less false warnings screenshot here s a screenshot of a friend which was in this situation just a few days ago internal tracking id
| 1
|
61,874
| 3,155,449,100
|
IssuesEvent
|
2015-09-17 08:54:41
|
OCHA-DAP/hdx-ckan
|
https://api.github.com/repos/OCHA-DAP/hdx-ckan
|
closed
|
Dataset edit: show all lines of resource description when editing.
|
Priority-Medium
|
Show all lines of the resource description on the resource list page. This makes it easier to manage descriptions for datasets with many resources. I don't have a great example of this at my fingertips, but you get the idea from this one: https://data.hdx.rwlabs.org/dataset/resources/administrative-boundaries-cod. This remains low priority.
|
1.0
|
Dataset edit: show all lines of resource description when editing. - Show all lines of the resource description on the resource list page. This makes it easier to manage descriptions for datasets with many resources. I don't have a great example of this at my fingertips, but you get the idea from this one: https://data.hdx.rwlabs.org/dataset/resources/administrative-boundaries-cod. This remains low priority.
|
non_process
|
dataset edit show all lines of resource description when editing show all lines of the resource description on the resource list page this makes it easier to manage descriptions for datasets with many resources i don t have a great example of this at my fingertips but you get the idea from this one this remains low priority
| 0
|
722,757
| 24,873,500,982
|
IssuesEvent
|
2022-10-27 17:02:00
|
ncssar/radiolog
|
https://api.github.com/repos/ncssar/radiolog
|
closed
|
toggle show/hide for simple dialogs (help, fsFilter, clueLog)
|
enhancement Priority:Low
|
clicking the button on the main dialog that raises each of these other dialogs, should also lower the dialog if already shown.
Right now, clicking the help button or fsFilter button again does nothing, and clicking the clueLog button again actually raises the main dialog but leaves the clueLogDialog shown in the background - not sure how much of a problem that is, but it should be cleaned up.
|
1.0
|
toggle show/hide for simple dialogs (help, fsFilter, clueLog) - clicking the button on the main dialog that raises each of these other dialogs, should also lower the dialog if already shown.
Right now, clicking the help button or fsFilter button again does nothing, and clicking the clueLog button again actually raises the main dialog but leaves the clueLogDialog shown in the background - not sure how much of a problem that is, but it should be cleaned up.
|
non_process
|
toggle show hide for simple dialogs help fsfilter cluelog clicking the button on the main dialog that raises each of these other dialogs should also lower the dialog if already shown right now clicking the help button or fsfilter button again does nothing and clicking the cluelog button again actually raises the main dialog but leaves the cluelogdialog shown in the background not sure how much of a problem that is but should be cleaned up
| 0
|
90,338
| 26,050,962,940
|
IssuesEvent
|
2022-12-22 18:37:21
|
apple/swift
|
https://api.github.com/repos/apple/swift
|
closed
|
CMake error when running `build-script` with `--xcode`
|
bug build-script generated xcode project build build error
|
<!--
This repository tracks issues related to the implementation of the Swift
compiler, standard library, runtime, and tools that provide IDE support for
Swift (e.g. code completion). If the bug relates to the implementation of a
proprietary (closed-source) Apple framework such as UIKit, SwiftUI, Combine,
etc., please report it to https://feedbackassistant.apple.com instead.
-->
**Description**
<!-- Describe clearly and concisely what the bug is. -->
Build script fails on running:
```
utils/build-script --skip-build-benchmarks --skip-ios \
--skip-watchos --skip-tvos --swift-darwin-supported-archs "$(uname -m)" \
--swift-disable-dead-stripping --release-debuginfo --xcode --clean
```
This issue appears to be similar to the issue discussed in #62205
**Steps to reproduce**
<!--
Explain how to reproduce the problem (in steps if seen fit) and include either
an inline test case (preferred) or a project that reproduces it. Consider
reducing the sample to the smallest amount of code possible — a smaller test
case is easier to reason about and more appealing to сontributors.
-->
1. Checkout main: `utils/update-checkout --scheme main` (current revision: 244ca4e2426260e7b9161c2fd6534dc350983cdf)
2. Run the build script using the following command:
```
utils/build-script --skip-build-benchmarks --skip-ios \
--skip-watchos --skip-tvos --swift-darwin-supported-archs "$(uname -m)" \
--swift-disable-dead-stripping --release-debuginfo --xcode --clean
```
There following errors occur, 4 times each:
```
CMake Error in lib/ASTGen/CMakeLists.txt:
Imported target "SwiftSyntax::SwiftBasicFormat" includes non-existent path
"/Users/whiteio/Development/swift-project/build/Xcode-RelWithDebInfoAssert/earlyswiftsyntax-macosx-arm64/lib/swift/host"
in its INTERFACE_INCLUDE_DIRECTORIES. Possible reasons include:
* The path was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and references files it does not
provide.
```
```
CMake Error in lib/Parse/CMakeLists.txt:
Imported target "SwiftSyntax::SwiftBasicFormat" includes non-existent path
"/Users/whiteio/Development/swift-project/build/Xcode-RelWithDebInfoAssert/earlyswiftsyntax-macosx-arm64/lib/swift/host"
in its INTERFACE_INCLUDE_DIRECTORIES. Possible reasons include:
* The path was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and references files it does not
provide.
```
**Expected behavior**
It should successfully build and create the Xcode projects.
**Environment**
swift-driver version: 1.62.15
Apple Swift version 5.7.1 (swiftlang-5.7.1.135.3 clang-1400.0.29.51)
Target: arm64-apple-macosx12.0
Xcode 14.1
Build version 14B47b
|
3.0
|
CMake error when running `build-script` with `--xcode` - <!--
This repository tracks issues related to the implementation of the Swift
compiler, standard library, runtime, and tools that provide IDE support for
Swift (e.g. code completion). If the bug relates to the implementation of a
proprietary (closed-source) Apple framework such as UIKit, SwiftUI, Combine,
etc., please report it to https://feedbackassistant.apple.com instead.
-->
**Description**
<!-- Describe clearly and concisely what the bug is. -->
Build script fails on running:
```
utils/build-script --skip-build-benchmarks --skip-ios \
--skip-watchos --skip-tvos --swift-darwin-supported-archs "$(uname -m)" \
--swift-disable-dead-stripping --release-debuginfo --xcode --clean
```
This issue appears to be similar to the issue discussed in #62205
**Steps to reproduce**
<!--
Explain how to reproduce the problem (in steps if seen fit) and include either
an inline test case (preferred) or a project that reproduces it. Consider
reducing the sample to the smallest amount of code possible — a smaller test
case is easier to reason about and more appealing to сontributors.
-->
1. Checkout main: `utils/update-checkout --scheme main` (current revision: 244ca4e2426260e7b9161c2fd6534dc350983cdf)
2. Run the build script using the following command:
```
utils/build-script --skip-build-benchmarks --skip-ios \
--skip-watchos --skip-tvos --swift-darwin-supported-archs "$(uname -m)" \
--swift-disable-dead-stripping --release-debuginfo --xcode --clean
```
There following errors occur, 4 times each:
```
CMake Error in lib/ASTGen/CMakeLists.txt:
Imported target "SwiftSyntax::SwiftBasicFormat" includes non-existent path
"/Users/whiteio/Development/swift-project/build/Xcode-RelWithDebInfoAssert/earlyswiftsyntax-macosx-arm64/lib/swift/host"
in its INTERFACE_INCLUDE_DIRECTORIES. Possible reasons include:
* The path was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and references files it does not
provide.
```
```
CMake Error in lib/Parse/CMakeLists.txt:
Imported target "SwiftSyntax::SwiftBasicFormat" includes non-existent path
"/Users/whiteio/Development/swift-project/build/Xcode-RelWithDebInfoAssert/earlyswiftsyntax-macosx-arm64/lib/swift/host"
in its INTERFACE_INCLUDE_DIRECTORIES. Possible reasons include:
* The path was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and references files it does not
provide.
```
**Expected behavior**
It should successfully build and create the Xcode projects.
**Environment**
swift-driver version: 1.62.15
Apple Swift version 5.7.1 (swiftlang-5.7.1.135.3 clang-1400.0.29.51)
Target: arm64-apple-macosx12.0
Xcode 14.1
Build version 14B47b
|
non_process
|
cmake error when running build script with xcode this repository tracks issues related to the implementation of the swift compiler standard library runtime and tools that provide ide support for swift e g code completion if the bug relates to the implementation of a proprietary closed source apple framework such as uikit swiftui combine etc please report it to instead description build script fails on running utils build script skip build benchmarks skip ios skip watchos skip tvos swift darwin supported archs uname m swift disable dead stripping release debuginfo xcode clean this issue appears to be similar to the issue discussed in steps to reproduce explain how to reproduce the problem in steps if seen fit and include either an inline test case preferred or a project that reproduces it consider reducing the sample to the smallest amount of code possible — a smaller test case is easier to reason about and more appealing to сontributors checkout main utils update checkout scheme main current revision run the build script using the following command utils build script skip build benchmarks skip ios skip watchos skip tvos swift darwin supported archs uname m swift disable dead stripping release debuginfo xcode clean there following errors occur times each cmake error in lib astgen cmakelists txt imported target swiftsyntax swiftbasicformat includes non existent path users whiteio development swift project build xcode relwithdebinfoassert earlyswiftsyntax macosx lib swift host in its interface include directories possible reasons include the path was deleted renamed or moved to another location an install or uninstall procedure did not complete successfully the installation package was faulty and references files it does not provide cmake error in lib parse cmakelists txt imported target swiftsyntax swiftbasicformat includes non existent path users whiteio development swift project build xcode relwithdebinfoassert earlyswiftsyntax macosx lib swift host in its 
interface include directories possible reasons include the path was deleted renamed or moved to another location an install or uninstall procedure did not complete successfully the installation package was faulty and references files it does not provide expected behavior it should successfully build and create the xcode projects environment swift driver version apple swift version swiftlang clang target apple xcode build version
| 0
|
326,330
| 9,955,246,721
|
IssuesEvent
|
2019-07-05 10:29:05
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.cvs.com - see bug description
|
browser-firefox-mobile engine-gecko priority-normal
|
<!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
**URL**: https://www.cvs.com/webcontent/fast/v1/xid-campaign-redirector/#/?xid=mLPyeV2
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: clicked on a link in text message but the site doesn't load it just keeps trying to load but works fine in Firefox on Android on the same phone.
**Steps to Reproduce**:
but works fine in Firefox on Android on the same phone.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.cvs.com - see bug description - <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
**URL**: https://www.cvs.com/webcontent/fast/v1/xid-campaign-redirector/#/?xid=mLPyeV2
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: clicked on a link in text message but the site doesn't load it just keeps trying to load but works fine in Firefox on Android on the same phone.
**Steps to Reproduce**:
but works fine in Firefox on Android on the same phone.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description clicked on a link in text message but the site doesn t load it just keeps trying to load but works fine in firefox on android on the same phone steps to reproduce but works fine in firefox on android on the same phone browser configuration none from with ❤️
| 0
| 15,277 | 19,268,939,877 | IssuesEvent | 2021-12-10 01:27:31 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | Test failure System.ServiceProcess.Tests.ServiceBaseTests.LogWritten | arch-x86 area-System.ServiceProcess os-windows arch-x64 |
Run: [runtime-libraries-coreclr outerloop 20211209.3](https://dev.azure.com/dnceng/public/_build/results?buildId=1504723&view=ms.vss-test-web.build-test-results-tab&runId=42886656&paneView=debug&resultId=103888)
Failed test:
```
net7.0-windows-Release-x64-CoreCLR_release-Windows.81.Amd64.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.PropagateExceptionFromOnStart
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnContinueBeforePause
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnExecuteCustomCommand
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnPauseAndContinueThenStop
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnPauseThenStop
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnStartThenStop
- System.ServiceProcess.Tests.ServiceControllerTests.ConstructWithDisplayName
- System.ServiceProcess.Tests.ServiceControllerTests.ConstructWithMachineName
- System.ServiceProcess.Tests.ServiceControllerTests.ConstructWithServiceName
- System.ServiceProcess.Tests.ServiceControllerTests.Start_NullArg_ThrowsArgumentNullException
- System.ServiceProcess.Tests.ServiceControllerTests.*
net7.0-windows-Release-x64-CoreCLR_release-(Windows.Nano.1809.Amd64.Open)windows.10.amd64.serverrs5.open@mcr.microsoft.com/dotnet-buildtools/prereqs:nanoserver-1809-helix-amd64-08e8e40-20200107182504
- System.ServiceProcess.Tests.ServiceBaseTests.PropagateExceptionFromOnStart
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnContinueBeforePause
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnExecuteCustomCommand
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x64-CoreCLR_release-(Windows.Server.Core.1909.Amd64.Open)windows.10.amd64.server20h1.open@mcr.microsoft.com/dotnet-buildtools/prereqs:windowsservercore-2004-helix-amd64-20200904200251-272704c
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x64-CoreCLR_release-Windows.10.Amd64.Server19H1.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x64-CoreCLR_release-Windows.10.Amd64.ServerRS5.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x86-CoreCLR_release-Windows.10.Amd64.ServerRS5.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x86-CoreCLR_release-Windows.7.Amd64.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x86-CoreCLR_release-Windows.10.Amd64.Server19H1.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
```
**Error message:**
```
System.TypeLoadException : Could not load type 'Advapi32' from assembly 'System.ServiceProcess.ServiceController.TestService, Version=7.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' because the method 'CreateService' has no implementation (no RVA).
Stack trace
at System.ServiceProcess.Tests.TestServiceInstaller.Install()
at System.ServiceProcess.Tests.TestServiceProvider.CreateTestServices() in /_/src/libraries/System.ServiceProcess.ServiceController/tests/TestServiceProvider.cs:line 131
at System.ServiceProcess.Tests.TestServiceProvider..ctor(String serviceName) in /_/src/libraries/System.ServiceProcess.ServiceController/tests/TestServiceProvider.cs:line 84
at System.ServiceProcess.Tests.TestServiceProvider..ctor() in /_/src/libraries/System.ServiceProcess.ServiceController/tests/TestServiceProvider.cs:line 69
at System.ServiceProcess.Tests.ServiceBaseTests..ctor() in /_/src/libraries/System.ServiceProcess.ServiceController/tests/ServiceBaseTests.cs:line 29
at System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean wrapExceptions) in /_/src/coreclr/System.Private.CoreLib/src/System/RuntimeType.CoreCLR.cs:line 3747
```
| 1.0 |
Test failure System.ServiceProcess.Tests.ServiceBaseTests.LogWritten - Run: [runtime-libraries-coreclr outerloop 20211209.3](https://dev.azure.com/dnceng/public/_build/results?buildId=1504723&view=ms.vss-test-web.build-test-results-tab&runId=42886656&paneView=debug&resultId=103888)
Failed test:
```
net7.0-windows-Release-x64-CoreCLR_release-Windows.81.Amd64.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.PropagateExceptionFromOnStart
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnContinueBeforePause
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnExecuteCustomCommand
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnPauseAndContinueThenStop
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnPauseThenStop
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnStartThenStop
- System.ServiceProcess.Tests.ServiceControllerTests.ConstructWithDisplayName
- System.ServiceProcess.Tests.ServiceControllerTests.ConstructWithMachineName
- System.ServiceProcess.Tests.ServiceControllerTests.ConstructWithServiceName
- System.ServiceProcess.Tests.ServiceControllerTests.Start_NullArg_ThrowsArgumentNullException
- System.ServiceProcess.Tests.ServiceControllerTests.*
net7.0-windows-Release-x64-CoreCLR_release-(Windows.Nano.1809.Amd64.Open)windows.10.amd64.serverrs5.open@mcr.microsoft.com/dotnet-buildtools/prereqs:nanoserver-1809-helix-amd64-08e8e40-20200107182504
- System.ServiceProcess.Tests.ServiceBaseTests.PropagateExceptionFromOnStart
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnContinueBeforePause
- System.ServiceProcess.Tests.ServiceBaseTests.TestOnExecuteCustomCommand
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x64-CoreCLR_release-(Windows.Server.Core.1909.Amd64.Open)windows.10.amd64.server20h1.open@mcr.microsoft.com/dotnet-buildtools/prereqs:windowsservercore-2004-helix-amd64-20200904200251-272704c
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x64-CoreCLR_release-Windows.10.Amd64.Server19H1.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x64-CoreCLR_release-Windows.10.Amd64.ServerRS5.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x86-CoreCLR_release-Windows.10.Amd64.ServerRS5.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x86-CoreCLR_release-Windows.7.Amd64.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
net7.0-windows-Release-x86-CoreCLR_release-Windows.10.Amd64.Server19H1.Open
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten
- System.ServiceProcess.Tests.ServiceBaseTests.LogWritten_AutoLog_False
- System.ServiceProcess.Tests.ServiceBaseTests.*
```
**Error message:**
```
System.TypeLoadException : Could not load type 'Advapi32' from assembly 'System.ServiceProcess.ServiceController.TestService, Version=7.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' because the method 'CreateService' has no implementation (no RVA).
Stack trace
at System.ServiceProcess.Tests.TestServiceInstaller.Install()
at System.ServiceProcess.Tests.TestServiceProvider.CreateTestServices() in /_/src/libraries/System.ServiceProcess.ServiceController/tests/TestServiceProvider.cs:line 131
at System.ServiceProcess.Tests.TestServiceProvider..ctor(String serviceName) in /_/src/libraries/System.ServiceProcess.ServiceController/tests/TestServiceProvider.cs:line 84
at System.ServiceProcess.Tests.TestServiceProvider..ctor() in /_/src/libraries/System.ServiceProcess.ServiceController/tests/TestServiceProvider.cs:line 69
at System.ServiceProcess.Tests.ServiceBaseTests..ctor() in /_/src/libraries/System.ServiceProcess.ServiceController/tests/ServiceBaseTests.cs:line 29
at System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean wrapExceptions) in /_/src/coreclr/System.Private.CoreLib/src/System/RuntimeType.CoreCLR.cs:line 3747
```
| process |
test failure system serviceprocess tests servicebasetests logwritten run failed test windows release coreclr release windows open system serviceprocess tests servicebasetests logwritten system serviceprocess tests servicebasetests logwritten autolog false system serviceprocess tests servicebasetests propagateexceptionfromonstart system serviceprocess tests servicebasetests testoncontinuebeforepause system serviceprocess tests servicebasetests testonexecutecustomcommand system serviceprocess tests servicebasetests testonpauseandcontinuethenstop system serviceprocess tests servicebasetests testonpausethenstop system serviceprocess tests servicebasetests testonstartthenstop system serviceprocess tests servicecontrollertests constructwithdisplayname system serviceprocess tests servicecontrollertests constructwithmachinename system serviceprocess tests servicecontrollertests constructwithservicename system serviceprocess tests servicecontrollertests start nullarg throwsargumentnullexception system serviceprocess tests servicecontrollertests windows release coreclr release windows nano open windows open mcr microsoft com dotnet buildtools prereqs nanoserver helix system serviceprocess tests servicebasetests propagateexceptionfromonstart system serviceprocess tests servicebasetests testoncontinuebeforepause system serviceprocess tests servicebasetests testonexecutecustomcommand system serviceprocess tests servicebasetests windows release coreclr release windows server core open windows open mcr microsoft com dotnet buildtools prereqs windowsservercore helix system serviceprocess tests servicebasetests logwritten system serviceprocess tests servicebasetests logwritten autolog false system serviceprocess tests servicebasetests windows release coreclr release windows open system serviceprocess tests servicebasetests logwritten system serviceprocess tests servicebasetests logwritten autolog false system serviceprocess tests servicebasetests windows release coreclr release 
windows open system serviceprocess tests servicebasetests logwritten system serviceprocess tests servicebasetests logwritten autolog false system serviceprocess tests servicebasetests windows release coreclr release windows open system serviceprocess tests servicebasetests logwritten system serviceprocess tests servicebasetests logwritten autolog false system serviceprocess tests servicebasetests windows release coreclr release windows open system serviceprocess tests servicebasetests logwritten system serviceprocess tests servicebasetests logwritten autolog false system serviceprocess tests servicebasetests windows release coreclr release windows open system serviceprocess tests servicebasetests logwritten system serviceprocess tests servicebasetests logwritten autolog false system serviceprocess tests servicebasetests error message system typeloadexception could not load type from assembly system serviceprocess servicecontroller testservice version culture neutral publickeytoken because the method createservice has no implementation no rva stack trace at system serviceprocess tests testserviceinstaller install at system serviceprocess tests testserviceprovider createtestservices in src libraries system serviceprocess servicecontroller tests testserviceprovider cs line at system serviceprocess tests testserviceprovider ctor string servicename in src libraries system serviceprocess servicecontroller tests testserviceprovider cs line at system serviceprocess tests testserviceprovider ctor in src libraries system serviceprocess servicecontroller tests testserviceprovider cs line at system serviceprocess tests servicebasetests ctor in src libraries system serviceprocess servicecontroller tests servicebasetests cs line at system runtimetype createinstancedefaultctor boolean publiconly boolean wrapexceptions in src coreclr system private corelib src system runtimetype coreclr cs line
| 1
| 73,537 | 3,413,432,201 | IssuesEvent | 2015-12-06 17:53:01 | blackwatchint/blackwatchint | https://api.github.com/repos/blackwatchint/blackwatchint | closed | FSF Al Rayak | Medium Priority Meeting Modpack Rejected Request |
**Description:**
I'm pleased to present my latest terrain for Arma 3 : Al Rayak.
The map is 20 x 20 km, and aims at loosely representing a middle eastern country.
40 km of coastline, 7 cities, 3 seaports, 2 main rivers, 4 airfields, about 100 villages, many bridges, and far more trees than would be here in reality.
The vegetation is mostly a placeholder and will be updated in the future. I also plan to add many rock formations to give the whole area a more arid and mountaineous feeling.
**Download:** http://www.armaholic.com/page.php?id=28564
**Size:** 267MB
| 1.0 |
FSF Al Rayak - **Description:**
I'm pleased to present my latest terrain for Arma 3 : Al Rayak.
The map is 20 x 20 km, and aims at loosely representing a middle eastern country.
40 km of coastline, 7 cities, 3 seaports, 2 main rivers, 4 airfields, about 100 villages, many bridges, and far more trees than would be here in reality.
The vegetation is mostly a placeholder and will be updated in the future. I also plan to add many rock formations to give the whole area a more arid and mountaineous feeling.
**Download:** http://www.armaholic.com/page.php?id=28564
**Size:** 267MB
| non_process |
fsf al rayak description i m pleased to present my latest terrain for arma al rayak the map is x km and aims at loosely representing a middle eastern country km of coastline cities seaports main rivers airfields about villages many bridges and far more trees than would be here in reality the vegetation is mostly a placeholder and will be updated in the future i also plan to add many rock formations to give the whole area a more arid and mountaineous feeling download size
| 0
| 18,346 | 24,467,684,316 | IssuesEvent | 2022-10-07 16:29:04 | Julialt/GECKO-LISA-2022 | https://api.github.com/repos/Julialt/GECKO-LISA-2022 | opened | Modernize numbered do-loops in POSTPROCESSING/LIB/parse_chem_module.f90 | Postprocessor For later |
Warning: Fortran 2018 obsolescent feature: Labeled DO statement at (1)
../LIB/parse_chem_module.f90:565:23:
Need to make sure routine still operates as intended.
| 1.0 |
Modernize numbered do-loops in POSTPROCESSING/LIB/parse_chem_module.f90 - Warning: Fortran 2018 obsolescent feature: Labeled DO statement at (1)
../LIB/parse_chem_module.f90:565:23:
Need to make sure routine still operates as intended.
| process |
modernize numbered do loops in postprocessing lib parse chem module warning fortran obsolescent feature labeled do statement at lib parse chem module need to make sure routine still operates as intended
| 1
| 299,607 | 25,913,555,481 | IssuesEvent | 2022-12-15 15:40:17 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | DISABLED test_serialization_map_location (__main__.TestOldSerialization) | module: serialization triaged module: flaky-tests skipped |
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_serialization_map_location&suite=TestOldSerialization&file=test_serialization.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7402034645).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 1 green.
cc @mruberry
| 1.0 |
DISABLED test_serialization_map_location (__main__.TestOldSerialization) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_serialization_map_location&suite=TestOldSerialization&file=test_serialization.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7402034645).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 1 green.
cc @mruberry
| non_process |
disabled test serialization map location main testoldserialization platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with red and green cc mruberry
| 0
| 11,743 | 14,582,311,980 | IssuesEvent | 2020-12-18 12:10:05 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | Add types for `$on('beforeExit')` | bug/2-confirmed kind/bug process/candidate team/client tech/typescript |
The types for `$on('beforeExit')` don't exist, hence the API is not discoverable and useable from TypeScript.
| 1.0 |
Add types for `$on('beforeExit')` - The types for `$on('beforeExit')` don't exist, hence the API is not discoverable and useable from TypeScript.
| process |
add types for on beforeexit the types for on beforeexit don t exist hence the api is not discoverable and useable from typescript
| 1
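Each record pairs the raw `title - body` text with a lowercased, punctuation-stripped `text` column. The exact pipeline that produced that column is not shown anywhere in the dump; the sketch below is an approximation that reproduces it for simple rows like the prisma record above. Note it keeps ASCII letters only, so rows whose cleaned text retains emoji (the webcompat records keep ❤️) would not round-trip exactly.

```python
import re

def clean_issue_text(raw: str) -> str:
    """Approximate the dataset's 'text' column from the raw issue text:
    lowercase, drop URLs/HTML tags/punctuation/digits, collapse whitespace.
    This is a reconstruction, not the original preprocessing code.
    """
    text = raw.lower()
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs
    text = re.sub(r"<[^>]+>", " ", text)        # strip HTML tags
    text = re.sub(r"[^a-z\s]", " ", text)       # keep letters only
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace
```

Run on the prisma row's `text_combine`, this yields exactly the `text` value stored in that record: `add types for on beforeexit the types for on beforeexit don t exist hence the api is not discoverable and useable from typescript`.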
| 282,473 | 8,706,859,744 | IssuesEvent | 2018-12-06 05:05:36 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | developer.mozilla.org - desktop site instead of mobile site | browser-firefox-mobile priority-important |
<!-- @browser: Firefox Mobile 65.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:65.0) Gecko/65.0 Firefox/65.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://developer.mozilla.org/en-US/docs/Tools/Remote_Debugging/Debugging_Firefox_for_Android_with_WebIDE
**Browser / Version**: Firefox Mobile 65.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: repair all
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2018/12/bcb21a5b-069c-49b4-8b5d-dc63e8d345c7.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181205102000</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
| 1.0 |
developer.mozilla.org - desktop site instead of mobile site - <!-- @browser: Firefox Mobile 65.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:65.0) Gecko/65.0 Firefox/65.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://developer.mozilla.org/en-US/docs/Tools/Remote_Debugging/Debugging_Firefox_for_Android_with_WebIDE
**Browser / Version**: Firefox Mobile 65.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: repair all
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2018/12/bcb21a5b-069c-49b4-8b5d-dc63e8d345c7.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181205102000</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
| non_process |
developer mozilla org desktop site instead of mobile site url browser version firefox mobile operating system android tested another browser no problem type desktop site instead of mobile site description repair all steps to reproduce browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel nightly from with ❤️
| 0
| 13,674 | 16,419,805,952 | IssuesEvent | 2021-05-19 11:10:55 | Bedrohung-der-Bienen/Transformationsfelder-Digitalisierung | https://api.github.com/repos/Bedrohung-der-Bienen/Transformationsfelder-Digitalisierung | opened | Passwort von Benutzer soll geschützt sein | backend datenbank register process |
# Szenario: Der Benutzer will, dass sein Passwort geschützt ist und keiner es sehen kann.
- **Gegeben** Der Benutzer will sich registrieren und ist auf der Registrierungsseite
- **Wenn** der Benutzer sich registrieren will
- **Dann** gibt er die Daten ein
- **Und** sein Passwort soll für keinen ersichtlich sein
- **Und**
Das Passwort des Benutzers soll in der Datenbank geschützt werden.
-----
__Als__ Benutzer,
__möchte ich__ das mein Passwort nicht für jeden einsehbar ist,
__damit__ mein Passwort geschützt ist.
| 1.0 |
Passwort von Benutzer soll geschützt sein - # Szenario: Der Benutzer will, dass sein Passwort geschützt ist und keiner es sehen kann.
- **Gegeben** Der Benutzer will sich registrieren und ist auf der Registrierungsseite
- **Wenn** der Benutzer sich registrieren will
- **Dann** gibt er die Daten ein
- **Und** sein Passwort soll für keinen ersichtlich sein
- **Und**
Das Passwort des Benutzers soll in der Datenbank geschützt werden.
-----
__Als__ Benutzer,
__möchte ich__ das mein Passwort nicht für jeden einsehbar ist,
__damit__ mein Passwort geschützt ist.
| process |
passwort von benutzer soll geschützt sein szenario der benutzer will dass sein passwort geschützt ist und keiner es sehen kann gegeben der benutzer will sich registrieren und ist auf der registrierungsseite wenn der benutzer sich registrieren will dann gibt er die daten ein und sein passwort soll für keinen ersichtlich sein und das passwort des benutzers soll in der datenbank geschützt werden als benutzer möchte ich das mein passwort nicht für jeden einsehbar ist damit mein passwort geschützt ist
| 1
| 272,260 | 29,794,999,432 | IssuesEvent | 2023-06-16 01:03:08 | billmcchesney1/hadoop | https://api.github.com/repos/billmcchesney1/hadoop | closed | CVE-2021-23445 (Medium) detected in jquery.dataTables-1.10.18.min.js - autoclosed | Mend: dependency security vulnerability |
## CVE-2021-23445 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery.dataTables-1.10.18.min.js</b></p></summary>
<p>DataTables enhances HTML tables with the ability to sort, filter and page the data in the table very easily. It provides a comprehensive API and set of configuration options, allowing you to consume data from virtually any data source.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/datatables/1.10.18/js/jquery.dataTables.min.js">https://cdnjs.cloudflare.com/ajax/libs/datatables/1.10.18/js/jquery.dataTables.min.js</a></p>
<p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery.dataTables-1.10.18.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p>
<p>Found in base branch: <b>trunk</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package datatables.net before 1.11.3. If an array is passed to the HTML escape entities function it would not have its contents escaped.
<p>Publish Date: 2021-09-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23445>CVE-2021-23445</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23445">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23445</a></p>
<p>Release Date: 2021-09-27</p>
<p>Fix Resolution: datatables.net - 1.11.3</p>
</p>
</details>
<p></p>
| True |
CVE-2021-23445 (Medium) detected in jquery.dataTables-1.10.18.min.js - autoclosed - ## CVE-2021-23445 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery.dataTables-1.10.18.min.js</b></p></summary>
<p>DataTables enhances HTML tables with the ability to sort, filter and page the data in the table very easily. It provides a comprehensive API and set of configuration options, allowing you to consume data from virtually any data source.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/datatables/1.10.18/js/jquery.dataTables.min.js">https://cdnjs.cloudflare.com/ajax/libs/datatables/1.10.18/js/jquery.dataTables.min.js</a></p>
<p>Path to vulnerable library: /hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js,/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery.dataTables-1.10.18.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/hadoop/commit/6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a">6dcd8400219941dcbd7fb0f6b980cc2c6a2a6b0a</a></p>
<p>Found in base branch: <b>trunk</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package datatables.net before 1.11.3. If an array is passed to the HTML escape entities function it would not have its contents escaped.
<p>Publish Date: 2021-09-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23445>CVE-2021-23445</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23445">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23445</a></p>
<p>Release Date: 2021-09-27</p>
<p>Fix Resolution: datatables.net - 1.11.3</p>
</p>
</details>
<p></p>
| non_process |
cve medium detected in jquery datatables min js autoclosed cve medium severity vulnerability vulnerable library jquery datatables min js datatables enhances html tables with the ability to sort filter and page the data in the table very easily it provides a comprehensive api and set of configuration options allowing you to consume data from virtually any data source library home page a href path to vulnerable library hadoop yarn project hadoop yarn hadoop yarn common src main resources webapps static dt js jquery datatables min js hadoop yarn project hadoop yarn hadoop yarn common target classes webapps static dt js jquery datatables min js dependency hierarchy x jquery datatables min js vulnerable library found in head commit a href found in base branch trunk vulnerability details this affects the package datatables net before if an array is passed to the html escape entities function it would not have its contents escaped publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution datatables net
| 0
| 8,949 | 12,058,976,384 | IssuesEvent | 2020-04-15 18:25:54 | hashgraph/hedera-mirror-node | https://api.github.com/repos/hashgraph/hedera-mirror-node | opened | Auto Clean up old master images in GCR | P2 enhancement process |
**Problem**
As part of CI we will push images to GCR on master merges.
This will create a build up of images that we'll want to remove.
**Solution**
Update circle ci config to remove images older than 7 days
**Additional Context**
Current logic exists to do this but we say discrepancies in the time filter at the precision of a few hours. It's thought this doesn't affects precision of days.
For now logic is left as an echo to see intentions but not actually remove images.
| 1.0 |
Auto Clean up old master images in GCR - **Problem**
As part of CI we will push images to GCR on master merges.
This will create a build up of images that we'll want to remove.
**Solution**
Update circle ci config to remove images older than 7 days
**Additional Context**
Current logic exists to do this but we say discrepancies in the time filter at the precision of a few hours. It's thought this doesn't affects precision of days.
For now logic is left as an echo to see intentions but not actually remove images.
| process |
auto clean up old master images in gcr problem as part of ci we will push images to gcr on master merges this will create a build up of images that we ll want to remove solution update circle ci config to remove images older than days additional context current logic exists to do this but we say discrepancies in the time filter at the precision of a few hours it s thought this doesn t affects precision of days for now logic is left as an echo to see intentions but not actually remove images
| 1
|
2,217
| 5,059,639,838
|
IssuesEvent
|
2016-12-22 08:53:31
|
itsyouonline/identityserver
|
https://api.github.com/repos/itsyouonline/identityserver
|
closed
|
Security issue: Members of organizations can perform owner tasks
|
priority_critical process_wontfix
|
As member of an organization I was allowed to:
* Remove an invite
* Invite a user
* Add api keys
|
1.0
|
Security issue: Members of organizations can perform owner tasks - As member of an organization I was allowed to:
* Remove an invite
* Invite a user
* Add api keys
|
process
|
security issue members of organizations can perform owner tasks as member of an organization i was allowed to remove an invite invite a user add api keys
| 1
|
11,336
| 14,147,501,791
|
IssuesEvent
|
2020-11-10 20:55:02
|
retaildevcrews/ngsa
|
https://api.github.com/repos/retaildevcrews/ngsa
|
closed
|
NGSA - Survey - M1 - Sprint1
|
Process Retro
|
### How well was the backlog maintained
- [ ] We did not use a backlog.
- [ ] We created a backlog, but did not maintain it.
- [ ] Our backlog was loosely defined for the project.
- [ ] Our backlog was organized into well-defined work items.
- [ ] Our backlog was organized into well-defined work items and was actively maintained.
- [x] Our backlog was thorough, maintained, and every pull request was associated with a work item.
### How effective was sprint planning
- [ ] We did not do any planning.
- [ ] We planned some of the work.
- [ ] We planned but did not estimate the work.
- [ ] We underestimated and didn’t close out the sprint.
- [ ] All work was planned and well estimated.
- [x] All work was planned, well estimated, and had well-defined acceptance criteria.
### How useful were stand ups
- [ ] We didn't have stand ups.
- [ ] We didn’t meet with any regular cadence.
- [ ] Participation was not consistent.
- [ ] They were too long, with too much detail.
- [ ] People shared updates, but I usually didn’t get unblocked.
- [x] Very efficient. People shared openly and received the help they needed.
### How informative was the retrospective
- [ ] We didn’t have a retrospective.
- [ ] We had a retrospective because they are part of our process, but it wasn't useful.
- [ ] Retrospectives helped us understand and improve some aspects of the project and team interactions.
- [x] Retrospectives were key to our team’s success. We surfaced areas of improvement and acted on them.
### How thorough were design reviews
- [ ] We didn’t do any design reviews.
- [ ] We did a high-level system/architecture review.
- [ ] We produced and reviewed architecture and component/sequence/data flow diagrams.
- [ ] We produced and reviewed all design artifacts and solicited feedback from domain experts.
- [x] We produced and reviewed all design artifacts and solicited feedback from domain experts. As the project progressed, we actively validated and updated our designs, based on our learnings.
### How effective were code reviews
- [ ] We didn’t review code changes
- [ ] We used automated tooling to enforce basic convention/standards.
- [x] We used automated tooling to enforce basic convention/standards. Code changes required approval from one individual on the team.
- [ ] We used automated tooling to enforce basic convention/standards. Code changes required approval from two or more individuals on the team.
- [x] We used automated tooling to enforce basic convention/standards. Code changes required approval from two or more individuals on the team. Domain experts were added to reviews, when applicable.
### How were changes introduced to the codebase
- [ ] No governance; anyone could introduce changes to any part/branch of the codebase.
- [ ] Branches were used to isolate new changes and folded into an upstream branch via Pull Request.
- [ ] Branches were used to isolate new changes and folded into an upstream branch via Pull Request. Pull Requests were scoped to smaller, more granular changes.
- [ ] Branches were used to isolate new changes. Pull Requests were used to fold changes into a primary working branch. Multiple upstream branches were used to manage changes. Main is always shippable. Branch policies and/or commit hooks were in place.
- [x] Branches were used to isolate new changes. Pull Requests were used to fold changes into a primary working branch. Branch names and commit message(s) follow a convention and always reference back to a work item. Multiple upstream branches were used to manage/validate/promote changes. Main represents `last known good` and is always shippable. Branch policies and/or commit hooks were in place.
### How rigorous was the code validation
- [ ] We did not do any testing.
- [ ] Our work was primarily validated through manual testing.
- [ ] We consciously did not allocate time for automated testing.
- [ ] Automated tests existed in the project, but were challenging to run.
- [ ] New tests or test modifications accompanied every significant code change.
- [x] Our project contained automated tests, every check-in must have a test, and they ran as part of CI.
### How smooth was continuous integration
- [ ] We didn’t have any continuous integration configured.
- [ ] Builds were always done on a central build server.
- [ ] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for some of the code bases.
- [ ] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for all the code bases.
- [x] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for all the code bases. Built artifacts were always shared from a central artifact/package server.
### How reliable was continuous delivery
- [x] We didn’t have any continuous delivery configured.
- [ ] We had scripts for some deployments.
- [ ] We had scripts for both creating and deploying some services to an environment.
- [ ] We had scripts for both creating and deploying all services to an environment.
- [ ] There were multiple environments and deployments into them were automated and well understood.
### How was observability achieved
- [ ] We didn’t add any logging, metrics, tracing, or monitoring.
- [ ] We added some logging, metrics, tracing, and/or monitoring but it was not done consistently across all system components.
- [ ] We added logging, metrics, tracing, and/or monitoring across most components. However, the implementation was not complete; ex) we did not use correlation ids or business context was missing or alerts were not defined for monitored components, etc.
- [ ] We added extensive logging, metrics, tracing, and monitoring alerts to facilitate debugging, viewing of historical trends, understanding control flow, and the current state of the system.
- [x] We designed and implemented instrumentation to help run the solution with the goal of adding value to the customer.
### How was security evaluated in this engagement
- [ ] We did not evaluate security as a part of this engagement.
- [ ] Security was evaluated only at the end of the engagement; little to no time was available to remediate issues.
- [ ] Security was evaluated only at the end of the engagement; there was time remaining prior to hand-off to fix issues (if needed).
- [ ] Secure design was considered during the design and implementation phases but with no ongoing support.
- [x] Secure design was considered during the design and implementation phases, and ongoing automated testing was introduced to the DevSecOps process prior to hand off.
### How was impactful Product Group engineering feedback provided
- [ ] Microsoft products/services worked flawlessly without any issues, therefore, there was no engineering feedback to share.
- [ ] We encountered some friction with Microsoft products/services but didn’t submit any engineering feedback for the Product Group.
- [ ] We shared our feedback directly with the Product Group but only in an ad-hoc manner (i.e. via email, teams, etc).
- [ ] Mostly at the end of the engagement, we submitted some engineering feedback via CSE Feedback tool.
- [x] On an ongoing basis, we submitted all of the relevant high-quality feedback via CSE Feedback tool, including priority, scenario-based description, repro steps with screenshots, and attached relevant email threads with the Product Group.
|
1.0
|
NGSA - Survey - M1 - Sprint1 - ### How well was the backlog maintained
- [ ] We did not use a backlog.
- [ ] We created a backlog, but did not maintain it.
- [ ] Our backlog was loosely defined for the project.
- [ ] Our backlog was organized into well-defined work items.
- [ ] Our backlog was organized into well-defined work items and was actively maintained.
- [x] Our backlog was thorough, maintained, and every pull request was associated with a work item.
### How effective was sprint planning
- [ ] We did not do any planning.
- [ ] We planned some of the work.
- [ ] We planned but did not estimate the work.
- [ ] We underestimated and didn’t close out the sprint.
- [ ] All work was planned and well estimated.
- [x] All work was planned, well estimated, and had well-defined acceptance criteria.
### How useful were stand ups
- [ ] We didn't have stand ups.
- [ ] We didn’t meet with any regular cadence.
- [ ] Participation was not consistent.
- [ ] They were too long, with too much detail.
- [ ] People shared updates, but I usually didn’t get unblocked.
- [x] Very efficient. People shared openly and received the help they needed.
### How informative was the retrospective
- [ ] We didn’t have a retrospective.
- [ ] We had a retrospective because they are part of our process, but it wasn't useful.
- [ ] Retrospectives helped us understand and improve some aspects of the project and team interactions.
- [x] Retrospectives were key to our team’s success. We surfaced areas of improvement and acted on them.
### How thorough were design reviews
- [ ] We didn’t do any design reviews.
- [ ] We did a high-level system/architecture review.
- [ ] We produced and reviewed architecture and component/sequence/data flow diagrams.
- [ ] We produced and reviewed all design artifacts and solicited feedback from domain experts.
- [x] We produced and reviewed all design artifacts and solicited feedback from domain experts. As the project progressed, we actively validated and updated our designs, based on our learnings.
### How effective were code reviews
- [ ] We didn’t review code changes
- [ ] We used automated tooling to enforce basic convention/standards.
- [x] We used automated tooling to enforce basic convention/standards. Code changes required approval from one individual on the team.
- [ ] We used automated tooling to enforce basic convention/standards. Code changes required approval from two or more individuals on the team.
- [x] We used automated tooling to enforce basic convention/standards. Code changes required approval from two or more individuals on the team. Domain experts were added to reviews, when applicable.
### How were changes introduced to the codebase
- [ ] No governance; anyone could introduce changes to any part/branch of the codebase.
- [ ] Branches were used to isolate new changes and folded into an upstream branch via Pull Request.
- [ ] Branches were used to isolate new changes and folded into an upstream branch via Pull Request. Pull Requests were scoped to smaller, more granular changes.
- [ ] Branches were used to isolate new changes. Pull Requests were used to fold changes into a primary working branch. Multiple upstream branches were used to manage changes. Main is always shippable. Branch policies and/or commit hooks were in place.
- [x] Branches were used to isolate new changes. Pull Requests were used to fold changes into a primary working branch. Branch names and commit message(s) follow a convention and always reference back to a work item. Multiple upstream branches were used to manage/validate/promote changes. Main represents `last known good` and is always shippable. Branch policies and/or commit hooks were in place.
### How rigorous was the code validation
- [ ] We did not do any testing.
- [ ] Our work was primarily validated through manual testing.
- [ ] We consciously did not allocate time for automated testing.
- [ ] Automated tests existed in the project, but were challenging to run.
- [ ] New tests or test modifications accompanied every significant code change.
- [x] Our project contained automated tests, every check-in must have a test, and they ran as part of CI.
### How smooth was continuous integration
- [ ] We didn’t have any continuous integration configured.
- [ ] Builds were always done on a central build server.
- [ ] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for some of the code bases.
- [ ] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for all the code bases.
- [x] Builds are always done on a central build server. Automated tests prevented check-ins that would result in a broken build, for all the code bases. Built artifacts were always shared from a central artifact/package server.
### How reliable was continuous delivery
- [x] We didn’t have any continuous delivery configured.
- [ ] We had scripts for some deployments.
- [ ] We had scripts for both creating and deploying some services to an environment.
- [ ] We had scripts for both creating and deploying all services to an environment.
- [ ] There were multiple environments and deployments into them were automated and well understood.
### How was observability achieved
- [ ] We didn’t add any logging, metrics, tracing, or monitoring.
- [ ] We added some logging, metrics, tracing, and/or monitoring but it was not done consistently across all system components.
- [ ] We added logging, metrics, tracing, and/or monitoring across most components. However, the implementation was not complete; ex) we did not use correlation ids or business context was missing or alerts were not defined for monitored components, etc.
- [ ] We added extensive logging, metrics, tracing, and monitoring alerts to facilitate debugging, viewing of historical trends, understanding control flow, and the current state of the system.
- [x] We designed and implemented instrumentation to help run the solution with the goal of adding value to the customer.
### How was security evaluated in this engagement
- [ ] We did not evaluate security as a part of this engagement.
- [ ] Security was evaluated only at the end of the engagement; little to no time was available to remediate issues.
- [ ] Security was evaluated only at the end of the engagement; there was time remaining prior to hand-off to fix issues (if needed).
- [ ] Secure design was considered during the design and implementation phases but with no ongoing support.
- [x] Secure design was considered during the design and implementation phases, and ongoing automated testing was introduced to the DevSecOps process prior to hand off.
### How was impactful Product Group engineering feedback provided
- [ ] Microsoft products/services worked flawlessly without any issues, therefore, there was no engineering feedback to share.
- [ ] We encountered some friction with Microsoft products/services but didn’t submit any engineering feedback for the Product Group.
- [ ] We shared our feedback directly with the Product Group but only in an ad-hoc manner (i.e. via email, teams, etc).
- [ ] Mostly at the end of the engagement, we submitted some engineering feedback via CSE Feedback tool.
- [x] On an ongoing basis, we submitted all of the relevant high-quality feedback via CSE Feedback tool, including priority, scenario-based description, repro steps with screenshots, and attached relevant email threads with the Product Group.
|
process
|
ngsa survey how well was the backlog maintained we did not use a backlog we created a backlog but did not maintain it our backlog was loosely defined for the project our backlog was organized into well defined work items our backlog was organized into well defined work items and was actively maintained our backlog was thorough maintained and every pull request was associated with a work item how effective was sprint planning we did not do any planning we planned some of the work we planned but did not estimate the work we underestimated and didn’t close out the sprint all work was planned and well estimated all work was planned well estimated and had well defined acceptance criteria how useful were stand ups we didn t have stand ups we didn’t meet with any regular cadence participation was not consistent they were too long with too much detail people shared updates but i usually didn’t get unblocked very efficient people shared openly and received the help they needed how informative was the retrospective we didn’t have a retrospective we had a retrospective because they are part of our process but it wasn t useful retrospectives helped us understand and improve some aspects of the project and team interactions retrospectives were key to our team’s success we surfaced areas of improvement and acted on them how thorough were design reviews we didn’t do any design reviews we did a high level system architecture review we produced and reviewed architecture and component sequence data flow diagrams we produced and reviewed all design artifacts and solicited feedback from domain experts we produced and reviewed all design artifacts and solicited feedback from domain experts as the project progressed we actively validated and updated our designs based on our learnings how effective were code reviews we didn’t review code changes we used automated tooling to enforce basic convention standards we used automated tooling to enforce basic convention standards code changes 
required approval from one individual on the team we used automated tooling to enforce basic convention standards code changes required approval from two or more individuals on the team we used automated tooling to enforce basic convention standards code changes required approval from two or more individuals on the team domain experts were added to reviews when applicable how were changes introduced to the codebase no governance anyone could introduce changes to any part branch of the codebase branches were used to isolate new changes and folded into an upstream branch via pull request branches were used to isolate new changes and folded into an upstream branch via pull request pull requests were scoped to smaller more granular changes branches were used to isolate new changes pull requests were used to fold changes into a primary working branch multiple upstream branches were used to manage changes main is always shippable branch policies and or commit hooks were in place branches were used to isolate new changes pull requests were used to fold changes into a primary working branch branch names and commit message s follow a convention and always reference back to a work item multiple upstream branches were used to manage validate promote changes main represents last known good and is always shippable branch policies and or commit hooks were in place how rigorous was the code validation we did not do any testing our work was primarily validated through manual testing we consciously did not allocate time for automated testing automated tests existed in the project but were challenging to run new tests or test modifications accompanied every significant code change our project contained automated tests every check in must have a test and they ran as part of ci how smooth was continuous integration we didn’t have any continuous integration configured builds were always done on a central build server builds are always done on a central build server automated tests 
prevented check ins that would result in a broken build for some of the code bases builds are always done on a central build server automated tests prevented check ins that would result in a broken build for all the code bases builds are always done on a central build server automated tests prevented check ins that would result in a broken build for all the code bases built artifacts were always shared from a central artifact package server how reliable was continuous delivery we didn’t have any continuous delivery configured we had scripts for some deployments we had scripts for both creating and deploying some services to an environment we had scripts for both creating and deploying all services to an environment there were multiple environments and deployments into them were automated and well understood how was observability achieved we didn’t add any logging metrics tracing or monitoring we added some logging metrics tracing and or monitoring but it was not done consistently across all system components we added logging metrics tracing and or monitoring across most components however the implementation was not complete ex we did not use correlation ids or business context was missing or alerts were not defined for monitored components etc we added extensive logging metrics tracing and monitoring alerts to facilitate debugging viewing of historical trends understanding control flow and the current state of the system we designed and implemented instrumentation to help run the solution with the goal of adding value to the customer how was security evaluated in this engagement we did not evaluate security as a part of this engagement security was evaluated only at the end of the engagement little to no time was available to remediate issues security was evaluated only at the end of the engagement there was time remaining prior to hand off to fix issues if needed secure design was considered during the design and implementation phases but with no ongoing support 
secure design was considered during the design and implementation phases and ongoing automated testing was introduced to the devsecops process prior to hand off how was impactful product group engineering feedback provided microsoft products services worked flawlessly without any issues therefore there was no engineering feedback to share we encountered some friction with microsoft products services but didn’t submit any engineering feedback for the product group we shared our feedback directly with the product group but only in an ad hoc manner i e via email teams etc mostly at the end of the engagement we submitted some engineering feedback via cse feedback tool on an ongoing basis we submitted all of the relevant high quality feedback via cse feedback tool including priority scenario based description repro steps with screenshots and attached relevant email threads with the product group
| 1
|
4,104
| 7,050,849,711
|
IssuesEvent
|
2018-01-03 09:00:49
|
KIST-Iceberg/Iceberg
|
https://api.github.com/repos/KIST-Iceberg/Iceberg
|
closed
|
Make Image rotate and filtered data set
|
image pre-processing
|
Rotate : 30 degree
Filter : High-pass Filtering + Gamma Correction
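The two filter steps named above can be sketched on a normalized [0, 1] grayscale array. This is an assumption-laden sketch, not the project's implementation: the 30-degree rotation step would typically use something like `scipy.ndimage.rotate` and is omitted here, and the box-blur high-pass stands in for whatever kernel the authors used.

```python
# Sketch of the filtering steps: a simple high-pass (image minus a
# box-blurred copy) followed by standard gamma correction. The 30-degree
# rotation step is omitted (library choice assumed, e.g. scipy.ndimage).
import numpy as np

def gamma_correct(img, gamma=2.2):
    # Standard gamma correction: out = in ** (1 / gamma), on [0, 1] input.
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

def high_pass(img, k=3):
    # High-pass as identity minus a k x k box blur (edge-padded).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    return img - blurred

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
# Shift the zero-centered high-pass output back into [0, 1] before gamma.
out = gamma_correct(high_pass(img) + 0.5)
print(out.shape)  # (4, 4)
```

A constant image has no high-frequency content, so `high_pass` maps it to all zeros; that makes a convenient sanity check for the kernel arithmetic.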
|
1.0
|
Make Image rotate and filtered data set - Rotate : 30 degree
Filter : High-pass Filtering + Gamma Correction
|
process
|
make image rotate and filtered data set rotate degree filter high pass filtering gamma correction
| 1
|
15,173
| 18,947,698,124
|
IssuesEvent
|
2021-11-18 12:01:45
|
UserOfficeProject/stfc-user-office-project
|
https://api.github.com/repos/UserOfficeProject/stfc-user-office-project
|
closed
|
Ensure UO roles are applied to the right people
|
type: process area: uop/stfc
|
We need to ensure that certain people from the User Office can use the User Officer role.
|
1.0
|
Ensure UO roles are applied to the right people - We need to ensure that certain people from the User Office can use the User Officer role.
|
process
|
ensure uo roles are applied to the right people we need to ensure that certain people from the user office can use the user officer role
| 1
|
25,741
| 5,195,722,170
|
IssuesEvent
|
2017-01-23 10:21:20
|
Cornices/cornice.ext.swagger
|
https://api.github.com/repos/Cornices/cornice.ext.swagger
|
opened
|
Document callable interfaces
|
documentation
|
Non-exhaustive list of interfaces that should be documented:
- default tag generator
- default op id generator
- Type converters
- Validator converters
- Schema transformers
|
1.0
|
Document callable interfaces - Non-exhaustive list of interfaces that should be documented:
- default tag generator
- default op id generator
- Type converters
- Validator converters
- Schema transformers
|
non_process
|
document callable interfaces non exhaustive list of interfaces that should be documented default tag generator default op id generator type converters validator converters schema transformers
| 0
|
12,087
| 14,740,057,579
|
IssuesEvent
|
2021-01-07 08:26:31
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Fair Oaks - SA Billing - Late Fee Account List
|
anc-process anp-important ant-bug has attachment
|
In GitLab by @kdjstudios on Oct 3, 2018, 11:03
[Fair_Oaks.xlsx](/uploads/49023d6fa21b43a70eb3b03800f2d693/Fair_Oaks.xlsx)
HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-32924/conversation
|
1.0
|
Fair Oaks - SA Billing - Late Fee Account List - In GitLab by @kdjstudios on Oct 3, 2018, 11:03
[Fair_Oaks.xlsx](/uploads/49023d6fa21b43a70eb3b03800f2d693/Fair_Oaks.xlsx)
HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-32924/conversation
|
process
|
fair oaks sa billing late fee account list in gitlab by kdjstudios on oct uploads fair oaks xlsx hd
| 1
|
8,534
| 11,705,864,815
|
IssuesEvent
|
2020-03-07 18:28:00
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
[VirusTotal] cutt.ly
|
waiting for Mitch whitelisting process
|
**Domains or links**
* http://cutt.ly/
**More Information**
I don't have time yet so can you please take a look at this for me?
I got the following per email. Thanks!
```
Hey folks,
Our client who owns
http://cutt.ly/
has requested that we submit a review to you folks to remove the listing here:
https://www.virustotal.com/gui/url/d4e0ce17c1faeba6a5fad91acbcde883af34c17d35e58d643441568d60e263a8/detection
Given the nature of their URL shortening service it is likely being abused by attackers, so if you see any particularly problematic links or URL's it would be helpful if you could report those to me.
Thanks for your help,
***
******** *******, ******
```
Cheers and thanks again for everything!
P.S.: I will answer the email with a link to this issue.
|
1.0
|
[VirusTotal] cutt.ly - **Domains or links**
* http://cutt.ly/
**More Information**
I don't have time yet so can you please take a look at this for me?
I got the following per email. Thanks!
```
Hey folks,
Our client who owns
http://cutt.ly/
has requested that we submit a review to you folks to remove the listing here:
https://www.virustotal.com/gui/url/d4e0ce17c1faeba6a5fad91acbcde883af34c17d35e58d643441568d60e263a8/detection
Given the nature of their URL shortening service it is likely being abused by attackers, so if you see any particularly problematic links or URL's it would be helpful if you could report those to me.
Thanks for your help,
***
******** *******, ******
```
Cheers and thanks again for everything!
P.S.: I will answer the email with a link to this issue.
|
process
|
cutt ly domains or links more information i don t have time yet so can you please take a look at this for me i got the following per email thanks hey folks our client who owns has requested that we submit a review to you folks to remove the listing here given the nature of their url shortening service it is likely being abused by attackers so if you see any particularly problematic links or url s it would be helpful if you could report those to me thanks for your help cheers and thanks again for everything p s i will answer the email with a link to this issue
| 1
|
97,946
| 29,116,611,287
|
IssuesEvent
|
2023-05-17 01:56:16
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
closed
|
Protocol Buffer error - critical I cannot import tensorflow
|
stat:awaiting response type:bug type:build/install stale TF 2.11
|
<details><summary>Click to expand!</summary>
### Issue Type
Bug
### Have you reproduced the bug with TF nightly?
No
### Source
binary
### Tensorflow Version
2.11
### Custom Code
No
### OS Platform and Distribution
_No response_
### Mobile device
_No response_
### Python version
3.10.6
### Bazel version
_No response_
### GCC/Compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current Behaviour?
Bug occurred today when reloading a jupyter notebook.
I previously tried to install cuML
https://docs.rapids.ai/install#pip-install
but the trace error seems unrelated to it, but mention `Protocol Buffers`:
### Standalone code to reproduce the issue
```shell
The error occurs when :
`import tensorflow`
and consequently any other import of libraries based on tf, such as:
`import umap`
will fail.
I cannot use TF anymore.
```
### Relevant log output
```shell
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[33], line 1
----> 1 import tensorflow as tf
2 #import umap
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/__init__.py:37
34 import sys as _sys
35 import typing as _typing
---> 37 from tensorflow.python.tools import module_util as _module_util
38 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
40 # Make sure code inside the TensorFlow codebase can use tf2.enabled() at import.
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/python/__init__.py:37
29 # We aim to keep this file minimal and ideally remove completely.
30 # If you are adding a new file with @tf_export decorators,
31 # import it in modules_with_exports.py instead.
32
33 # go/tf-wildcard-import
34 # pylint: disable=wildcard-import,g-bad-import-order,g-import-not-at-top
36 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
---> 37 from tensorflow.python.eager import context
39 # pylint: enable=wildcard-import
40
41 # Bring in subpackages.
42 from tensorflow.python import data
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/python/eager/context.py:28
25 from absl import logging
26 import numpy as np
---> 28 from tensorflow.core.framework import function_pb2
29 from tensorflow.core.protobuf import config_pb2
30 from tensorflow.core.protobuf import coordination_config_pb2
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/core/framework/function_pb2.py:16
11 # @@protoc_insertion_point(imports)
13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
17 from tensorflow.core.framework import node_def_pb2 as tensorflow_dot_core_dot_framework_dot_node__def__pb2
18 from tensorflow.core.framework import op_def_pb2 as tensorflow_dot_core_dot_framework_dot_op__def__pb2
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/core/framework/attr_value_pb2.py:16
11 # @@protoc_insertion_point(imports)
13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
17 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
18 from tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/core/framework/tensor_pb2.py:16
11 # @@protoc_insertion_point(imports)
13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
17 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
18 from tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/core/framework/resource_handle_pb2.py:16
11 # @@protoc_insertion_point(imports)
13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
17 from tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2
20 DESCRIPTOR = _descriptor.FileDescriptor(
21 name='tensorflow/core/framework/resource_handle.proto',
22 package='tensorflow',
(...)
26 ,
27 dependencies=[tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2.DESCRIPTOR,tensorflow_dot_core_dot_framework_dot_types__pb2.DESCRIPTOR,])
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/core/framework/tensor_shape_pb2.py:36
13 _sym_db = _symbol_database.Default()
18 DESCRIPTOR = _descriptor.FileDescriptor(
19 name='tensorflow/core/framework/tensor_shape.proto',
20 package='tensorflow',
(...)
23 serialized_pb=_b('\n,tensorflow/core/framework/tensor_shape.proto\x12\ntensorflow\"z\n\x10TensorShapeProto\x12-\n\x03\x64im\x18\x02 \x03(\x0b\x32 .tensorflow.TensorShapeProto.Dim\x12\x14\n\x0cunknown_rank\x18\x03 \x01(\x08\x1a!\n\x03\x44im\x12\x0c\n\x04size\x18\x01 \x01(\x03\x12\x0c\n\x04name\x18\x02 \x01(\tB\x87\x01\n\x18org.tensorflow.frameworkB\x11TensorShapeProtosP\x01ZSgithub.com/tensorflow/tensorflow/tensorflow/go/core/framework/tensor_shape_go_proto\xf8\x01\x01\x62\x06proto3')
24 )
29 _TENSORSHAPEPROTO_DIM = _descriptor.Descriptor(
30 name='Dim',
31 full_name='tensorflow.TensorShapeProto.Dim',
32 filename=None,
33 file=DESCRIPTOR,
34 containing_type=None,
35 fields=[
---> 36 _descriptor.FieldDescriptor(
37 name='size', full_name='tensorflow.TensorShapeProto.Dim.size', index=0,
38 number=1, type=3, cpp_type=2, label=1,
39 has_default_value=False, default_value=0,
40 message_type=None, enum_type=None, containing_type=None,
41 is_extension=False, extension_scope=None,
42 serialized_options=None, file=DESCRIPTOR),
43 _descriptor.FieldDescriptor(
44 name='name', full_name='tensorflow.TensorShapeProto.Dim.name', index=1,
45 number=2, type=9, cpp_type=9, label=1,
46 has_default_value=False, default_value=_b("").decode('utf-8'),
47 message_type=None, enum_type=None, containing_type=None,
48 is_extension=False, extension_scope=None,
49 serialized_options=None, file=DESCRIPTOR),
50 ],
51 extensions=[
52 ],
53 nested_types=[],
54 enum_types=[
55 ],
56 serialized_options=None,
57 is_extendable=False,
58 syntax='proto3',
59 extension_ranges=[],
60 oneofs=[
61 ],
62 serialized_start=149,
63 serialized_end=182,
64 )
66 _TENSORSHAPEPROTO = _descriptor.Descriptor(
67 name='TensorShapeProto',
68 full_name='tensorflow.TensorShapeProto',
(...)
100 serialized_end=182,
101 )
103 _TENSORSHAPEPROTO_DIM.containing_type = _TENSORSHAPEPROTO
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/google/protobuf/descriptor.py:560, in FieldDescriptor.__new__(cls, name, full_name, index, number, type, cpp_type, label, default_value, message_type, enum_type, containing_type, is_extension, extension_scope, options, serialized_options, has_default_value, containing_oneof, json_name, file, create_key)
554 def __new__(cls, name, full_name, index, number, type, cpp_type, label,
555 default_value, message_type, enum_type, containing_type,
556 is_extension, extension_scope, options=None,
557 serialized_options=None,
558 has_default_value=True, containing_oneof=None, json_name=None,
559 file=None, create_key=None): # pylint: disable=redefined-builtin
--> 560 _message.Message._CheckCalledFromGeneratedFile()
561 if is_extension:
562 return _message.default_pool.FindExtensionByName(full_name)
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
```
</details>
|
1.0
|
Protocol Buffer error - critical I cannot import tensorflow - <details><summary>Click to expand!</summary>
### Issue Type
Bug
### Have you reproduced the bug with TF nightly?
No
### Source
binary
### Tensorflow Version
2.11
### Custom Code
No
### OS Platform and Distribution
_No response_
### Mobile device
_No response_
### Python version
3.10.6
### Bazel version
_No response_
### GCC/Compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current Behaviour?
The bug occurred today when reloading a Jupyter notebook.
I had previously tried to install cuML
(https://docs.rapids.ai/install#pip-install),
but the traceback seems unrelated to that; it does, however, mention `Protocol Buffers`:
### Standalone code to reproduce the issue
```shell
The error occurs on:
`import tensorflow`
and consequently any import of a library built on TF, such as:
`import umap`
also fails.
I cannot use TF anymore.
```
### Relevant log output
```shell
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[33], line 1
----> 1 import tensorflow as tf
2 #import umap
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/__init__.py:37
34 import sys as _sys
35 import typing as _typing
---> 37 from tensorflow.python.tools import module_util as _module_util
38 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
40 # Make sure code inside the TensorFlow codebase can use tf2.enabled() at import.
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/python/__init__.py:37
29 # We aim to keep this file minimal and ideally remove completely.
30 # If you are adding a new file with @tf_export decorators,
31 # import it in modules_with_exports.py instead.
32
33 # go/tf-wildcard-import
34 # pylint: disable=wildcard-import,g-bad-import-order,g-import-not-at-top
36 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
---> 37 from tensorflow.python.eager import context
39 # pylint: enable=wildcard-import
40
41 # Bring in subpackages.
42 from tensorflow.python import data
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/python/eager/context.py:28
25 from absl import logging
26 import numpy as np
---> 28 from tensorflow.core.framework import function_pb2
29 from tensorflow.core.protobuf import config_pb2
30 from tensorflow.core.protobuf import coordination_config_pb2
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/core/framework/function_pb2.py:16
11 # @@protoc_insertion_point(imports)
13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
17 from tensorflow.core.framework import node_def_pb2 as tensorflow_dot_core_dot_framework_dot_node__def__pb2
18 from tensorflow.core.framework import op_def_pb2 as tensorflow_dot_core_dot_framework_dot_op__def__pb2
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/core/framework/attr_value_pb2.py:16
11 # @@protoc_insertion_point(imports)
13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
17 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
18 from tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/core/framework/tensor_pb2.py:16
11 # @@protoc_insertion_point(imports)
13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
17 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
18 from tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/core/framework/resource_handle_pb2.py:16
11 # @@protoc_insertion_point(imports)
13 _sym_db = _symbol_database.Default()
---> 16 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
17 from tensorflow.core.framework import types_pb2 as tensorflow_dot_core_dot_framework_dot_types__pb2
20 DESCRIPTOR = _descriptor.FileDescriptor(
21 name='tensorflow/core/framework/resource_handle.proto',
22 package='tensorflow',
(...)
26 ,
27 dependencies=[tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2.DESCRIPTOR,tensorflow_dot_core_dot_framework_dot_types__pb2.DESCRIPTOR,])
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/tensorflow/core/framework/tensor_shape_pb2.py:36
13 _sym_db = _symbol_database.Default()
18 DESCRIPTOR = _descriptor.FileDescriptor(
19 name='tensorflow/core/framework/tensor_shape.proto',
20 package='tensorflow',
(...)
23 serialized_pb=_b('\n,tensorflow/core/framework/tensor_shape.proto\x12\ntensorflow\"z\n\x10TensorShapeProto\x12-\n\x03\x64im\x18\x02 \x03(\x0b\x32 .tensorflow.TensorShapeProto.Dim\x12\x14\n\x0cunknown_rank\x18\x03 \x01(\x08\x1a!\n\x03\x44im\x12\x0c\n\x04size\x18\x01 \x01(\x03\x12\x0c\n\x04name\x18\x02 \x01(\tB\x87\x01\n\x18org.tensorflow.frameworkB\x11TensorShapeProtosP\x01ZSgithub.com/tensorflow/tensorflow/tensorflow/go/core/framework/tensor_shape_go_proto\xf8\x01\x01\x62\x06proto3')
24 )
29 _TENSORSHAPEPROTO_DIM = _descriptor.Descriptor(
30 name='Dim',
31 full_name='tensorflow.TensorShapeProto.Dim',
32 filename=None,
33 file=DESCRIPTOR,
34 containing_type=None,
35 fields=[
---> 36 _descriptor.FieldDescriptor(
37 name='size', full_name='tensorflow.TensorShapeProto.Dim.size', index=0,
38 number=1, type=3, cpp_type=2, label=1,
39 has_default_value=False, default_value=0,
40 message_type=None, enum_type=None, containing_type=None,
41 is_extension=False, extension_scope=None,
42 serialized_options=None, file=DESCRIPTOR),
43 _descriptor.FieldDescriptor(
44 name='name', full_name='tensorflow.TensorShapeProto.Dim.name', index=1,
45 number=2, type=9, cpp_type=9, label=1,
46 has_default_value=False, default_value=_b("").decode('utf-8'),
47 message_type=None, enum_type=None, containing_type=None,
48 is_extension=False, extension_scope=None,
49 serialized_options=None, file=DESCRIPTOR),
50 ],
51 extensions=[
52 ],
53 nested_types=[],
54 enum_types=[
55 ],
56 serialized_options=None,
57 is_extendable=False,
58 syntax='proto3',
59 extension_ranges=[],
60 oneofs=[
61 ],
62 serialized_start=149,
63 serialized_end=182,
64 )
66 _TENSORSHAPEPROTO = _descriptor.Descriptor(
67 name='TensorShapeProto',
68 full_name='tensorflow.TensorShapeProto',
(...)
100 serialized_end=182,
101 )
103 _TENSORSHAPEPROTO_DIM.containing_type = _TENSORSHAPEPROTO
File /data0/home/h21/luas6629/venv/lib/python3.10/site-packages/google/protobuf/descriptor.py:560, in FieldDescriptor.__new__(cls, name, full_name, index, number, type, cpp_type, label, default_value, message_type, enum_type, containing_type, is_extension, extension_scope, options, serialized_options, has_default_value, containing_oneof, json_name, file, create_key)
554 def __new__(cls, name, full_name, index, number, type, cpp_type, label,
555 default_value, message_type, enum_type, containing_type,
556 is_extension, extension_scope, options=None,
557 serialized_options=None,
558 has_default_value=True, containing_oneof=None, json_name=None,
559 file=None, create_key=None): # pylint: disable=redefined-builtin
--> 560 _message.Message._CheckCalledFromGeneratedFile()
561 if is_extension:
562 return _message.default_pool.FindExtensionByName(full_name)
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
```
</details>
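The two workarounds listed at the end of the error message can be sketched as follows. This is a minimal illustration, not project-specific guidance: the environment variable must be set before TensorFlow (or any generated `*_pb2` module) is imported, and the `pip` downgrade shown in the comment is the alternative the message itself suggests.

```python
import os

# Workaround 2 from the error message: force the pure-Python protobuf
# implementation. This must run before tensorflow (or any *_pb2 module)
# is imported, otherwise the C++ descriptor check has already fired.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

# Workaround 1 (alternative, run in the shell instead of Python):
#   pip install "protobuf<=3.20.3"

# import tensorflow as tf  # should now import without the TypeError
```

Note that the environment-variable route trades correctness of the import for speed: per the error message, pure-Python parsing is much slower, so downgrading protobuf (or upgrading the package that ships the stale `*_pb2.py` files) is usually preferable.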
|
non_process
|
protocol buffer error critical i cannot import tensorflow click to expand issue type bug have you reproduced the bug with tf nightly no source binary tensorflow version custom code no os platform and distribution no response mobile device no response python version bazel version no response gcc compiler version no response cuda cudnn version no response gpu model and memory no response current behaviour bug occurred today when reloading a jupyter notebook i previously tried to install cuml but the trace error seems unrelated to it but mention protocol buffers standalone code to reproduce the issue shell the error occurs when import tensorflow and consequently any other import of libraries based on tf such as import umap will fail i cannot use tf anymore relevant log output shell typeerror traceback most recent call last cell in line import tensorflow as tf import umap file home venv lib site packages tensorflow init py import sys as sys import typing as typing from tensorflow python tools import module util as module util from tensorflow python util lazy loader import lazyloader as lazyloader make sure code inside the tensorflow codebase can use enabled at import file home venv lib site packages tensorflow python init py we aim to keep this file minimal and ideally remove completely if you are adding a new file with tf export decorators import it in modules with exports py instead go tf wildcard import pylint disable wildcard import g bad import order g import not at top from tensorflow python import pywrap tensorflow as pywrap tensorflow from tensorflow python eager import context pylint enable wildcard import bring in subpackages from tensorflow python import data file home venv lib site packages tensorflow python eager context py from absl import logging import numpy as np from tensorflow core framework import function from tensorflow core protobuf import config from tensorflow core protobuf import coordination config file home venv lib site packages tensorflow 
core framework function py protoc insertion point imports sym db symbol database default from tensorflow core framework import attr value as tensorflow dot core dot framework dot attr value from tensorflow core framework import node def as tensorflow dot core dot framework dot node def from tensorflow core framework import op def as tensorflow dot core dot framework dot op def file home venv lib site packages tensorflow core framework attr value py protoc insertion point imports sym db symbol database default from tensorflow core framework import tensor as tensorflow dot core dot framework dot tensor from tensorflow core framework import tensor shape as tensorflow dot core dot framework dot tensor shape from tensorflow core framework import types as tensorflow dot core dot framework dot types file home venv lib site packages tensorflow core framework tensor py protoc insertion point imports sym db symbol database default from tensorflow core framework import resource handle as tensorflow dot core dot framework dot resource handle from tensorflow core framework import tensor shape as tensorflow dot core dot framework dot tensor shape from tensorflow core framework import types as tensorflow dot core dot framework dot types file home venv lib site packages tensorflow core framework resource handle py protoc insertion point imports sym db symbol database default from tensorflow core framework import tensor shape as tensorflow dot core dot framework dot tensor shape from tensorflow core framework import types as tensorflow dot core dot framework dot types descriptor descriptor filedescriptor name tensorflow core framework resource handle proto package tensorflow dependencies file home venv lib site packages tensorflow core framework tensor shape py sym db symbol database default descriptor descriptor filedescriptor name tensorflow core framework tensor shape proto package tensorflow serialized pb b n tensorflow core framework tensor shape proto ntensorflow z n n 
tensorflow tensorshapeproto dim n rank n n n tb n tensorflow frameworkb com tensorflow tensorflow tensorflow go core framework tensor shape go proto tensorshapeproto dim descriptor descriptor name dim full name tensorflow tensorshapeproto dim filename none file descriptor containing type none fields descriptor fielddescriptor name size full name tensorflow tensorshapeproto dim size index number type cpp type label has default value false default value message type none enum type none containing type none is extension false extension scope none serialized options none file descriptor descriptor fielddescriptor name name full name tensorflow tensorshapeproto dim name index number type cpp type label has default value false default value b decode utf message type none enum type none containing type none is extension false extension scope none serialized options none file descriptor extensions nested types enum types serialized options none is extendable false syntax extension ranges oneofs serialized start serialized end tensorshapeproto descriptor descriptor name tensorshapeproto full name tensorflow tensorshapeproto serialized end tensorshapeproto dim containing type tensorshapeproto file home venv lib site packages google protobuf descriptor py in fielddescriptor new cls name full name index number type cpp type label default value message type enum type containing type is extension extension scope options serialized options has default value containing oneof json name file create key def new cls name full name index number type cpp type label default value message type enum type containing type is extension extension scope options none serialized options none has default value true containing oneof none json name none file none create key none pylint disable redefined builtin message message checkcalledfromgeneratedfile if is extension return message default pool findextensionbyname full name typeerror descriptors cannot not be created directly if this call came 
from a py file your generated code is out of date and must be regenerated with protoc if you cannot immediately regenerate your protos some other possible workarounds are downgrade the protobuf package to x or lower set protocol buffers python implementation python but this will use pure python parsing and will be much slower more information
| 0
|
411,080
| 27,812,515,269
|
IssuesEvent
|
2023-03-18 09:43:18
|
caodaion/caodaion.github.io
|
https://api.github.com/repos/caodaion/caodaion.github.io
|
closed
|
DOCUMENTATION | PROJECT DOCUMENTATION | Status Report
|
documentation
|
A well-managed project always has a status report (in a properly defined template) sent out to the customer. Many project managers have told me that they do not send out a project status report because there is a discussion with the customer more or less every day and the customer never asks for one.
This I would call a bad practice. Irrespective of whether the customer asks for a status report, and irrespective of whether you are having daily discussions with the customer, as a project manager it is your duty to provide your customer with a status report – I would suggest sending one out on a weekly basis.
Trust me – it has been my experience that nothing delights the customer more than a good status report in their mailbox early on Monday morning.
You should provide the following in a customer’s status report:
Brief summary of the project progress
Items completed last week
Items planned to be taken up next week
Snapshot of the project schedule/milestones – with upcoming milestones highlighted
Updated Project risks and their mitigation
Current project issues and their status
It takes years of experience and practice to design a good status report template that fits the requirement/project/the customer, but you can take the above points as a start.
One more point to remember is that not all projects run smoothly; if at some point a project turns bad, the project status reports you have sent out act as documented evidence that the issues in the project were highlighted to the customer.
|
1.0
|
DOCUMENTATION | PROJECT DOCUMENTATION | Status Report - A well-managed project always has a status report (in a properly defined template) sent out to the customer. Many project managers have told me that they do not send out a project status report because there is a discussion with the customer more or less every day and the customer never asks for one.
This I would call a bad practice. Irrespective of whether the customer asks for a status report, and irrespective of whether you are having daily discussions with the customer, as a project manager it is your duty to provide your customer with a status report – I would suggest sending one out on a weekly basis.
Trust me – it has been my experience that nothing delights the customer more than a good status report in their mailbox early on Monday morning.
You should provide the following in a customer’s status report:
Brief summary of the project progress
Items completed last week
Items planned to be taken up next week
Snapshot of the project schedule/milestones – with upcoming milestones highlighted
Updated Project risks and their mitigation
Current project issues and their status
It takes years of experience and practice to design a good status report template that fits the requirement/project/the customer, but you can take the above points as a start.
One more point to remember is that not all projects run smoothly; if at some point a project turns bad, the project status reports you have sent out act as documented evidence that the issues in the project were highlighted to the customer.
|
non_process
|
documentation project documentation status report a well managed project always has a status report in a properly defined template sent out to the customer many project managers have told me that they do not send out a project status report because more or less everyday there is a discussion and his her customer never asks for one this i would call a bad practice irrespective of whether the customer asks for a status report or not irrespective of the fact if you are having discussions with the customer on a daily basis as a project manager it is your duty to provide your customer a status report – i would suggest that you send out a status report on a weekly basis trust me – it has been my experience that nothing delights the customer more than a good status report on early monday morning in his her mailbox you should provide the following in a customer’s status report brief summary of the project progress items completed last week items planned to be taken up next week snapshot of the project schedule milestones – with upcoming milestones highlighted updated project risks and their mitigation current project issues and their status it takes years of experience and practice to design a good status report template that fits the requirement project the customer but you can take the above points as a start one more point you would have to remember is that not all projects will run smoothly and if at a particular point in time a project turns bad then if you have sent out a project status report that acts as documented evidence that the issues in the project have been highlighted to the customer
| 0
|
69,966
| 15,044,475,740
|
IssuesEvent
|
2021-02-03 03:04:27
|
prafullkotecha/fabmedical
|
https://api.github.com/repos/prafullkotecha/fabmedical
|
closed
|
CVE-2020-7610 (High) detected in bson-1.1.1.tgz - autoclosed
|
security vulnerability
|
## CVE-2020-7610 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bson-1.1.1.tgz</b></p></summary>
<p>A bson parser for node.js and the browser</p>
<p>Library home page: <a href="https://registry.npmjs.org/bson/-/bson-1.1.1.tgz">https://registry.npmjs.org/bson/-/bson-1.1.1.tgz</a></p>
<p>Path to dependency file: fabmedical/content-init/package.json</p>
<p>Path to vulnerable library: fabmedical/content-init/node_modules/bson/package.json</p>
<p>
Dependency Hierarchy:
- mongoose-5.7.1.tgz (Root Library)
- :x: **bson-1.1.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/prafullkotecha/fabmedical/commit/bc40d13f4c0b341dcdb2a01ba143919c9edea9f4">bc40d13f4c0b341dcdb2a01ba143919c9edea9f4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of bson before 1.1.4 are vulnerable to Deserialization of Untrusted Data. The package will ignore an unknown value for an object's _bsontype, leading to cases where an object is serialized as a document rather than the intended BSON type.
<p>Publish Date: 2020-03-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7610>CVE-2020-7610</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/mongodb/js-bson/releases/tag/v1.1.4">https://github.com/mongodb/js-bson/releases/tag/v1.1.4</a></p>
<p>Release Date: 2020-03-30</p>
<p>Fix Resolution: bson - 1.1.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7610 (High) detected in bson-1.1.1.tgz - autoclosed - ## CVE-2020-7610 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bson-1.1.1.tgz</b></p></summary>
<p>A bson parser for node.js and the browser</p>
<p>Library home page: <a href="https://registry.npmjs.org/bson/-/bson-1.1.1.tgz">https://registry.npmjs.org/bson/-/bson-1.1.1.tgz</a></p>
<p>Path to dependency file: fabmedical/content-init/package.json</p>
<p>Path to vulnerable library: fabmedical/content-init/node_modules/bson/package.json</p>
<p>
Dependency Hierarchy:
- mongoose-5.7.1.tgz (Root Library)
- :x: **bson-1.1.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/prafullkotecha/fabmedical/commit/bc40d13f4c0b341dcdb2a01ba143919c9edea9f4">bc40d13f4c0b341dcdb2a01ba143919c9edea9f4</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of bson before 1.1.4 are vulnerable to Deserialization of Untrusted Data. The package will ignore an unknown value for an object's _bsontype, leading to cases where an object is serialized as a document rather than the intended BSON type.
<p>Publish Date: 2020-03-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7610>CVE-2020-7610</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/mongodb/js-bson/releases/tag/v1.1.4">https://github.com/mongodb/js-bson/releases/tag/v1.1.4</a></p>
<p>Release Date: 2020-03-30</p>
<p>Fix Resolution: bson - 1.1.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in bson tgz autoclosed cve high severity vulnerability vulnerable library bson tgz a bson parser for node js and the browser library home page a href path to dependency file fabmedical content init package json path to vulnerable library fabmedical content init node modules bson package json dependency hierarchy mongoose tgz root library x bson tgz vulnerable library found in head commit a href found in base branch master vulnerability details all versions of bson before are vulnerable to deserialization of untrusted data the package will ignore an unknown value for an object s bsotype leading to cases where an object is serialized as a document rather than the intended bson type publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bson step up your open source security game with whitesource
| 0
|
340,959
| 24,678,279,609
|
IssuesEvent
|
2022-10-18 18:54:48
|
nitrictech/docs
|
https://api.github.com/repos/nitrictech/docs
|
closed
|
Better env var docs
|
documentation good first issue
|
There has been a strong preference to use environment variables to store configuration data such as API keys. Let's enhance the docs to make it clear how to use environment variables for local development and how to deploy them with `nitric up`
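As a minimal sketch of the pattern this issue asks to document (the variable name `API_KEY` and the `auth_header` helper are hypothetical, and the `nitric up` deployment side is not shown here):

```python
import os

# Read configuration from the environment rather than hard-coding it;
# locally the value can be exported in the shell or loaded from a .env file.
api_key = os.environ.get("API_KEY", "dev-placeholder")

def auth_header(key: str) -> dict:
    # Build the header a hypothetical API client would send.
    return {"Authorization": f"Bearer {key}"}

header = auth_header(api_key)
```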
|
1.0
|
Better env var docs - There has been a strong preference to use environment variables to store configuration data such as API keys. Let's enhance the docs to make it clear how to use environment variables for local development and how to deploy them with `nitric up`
|
non_process
|
better env var docs there has been a strong preference to use environment variable to store configuration data such as api keys let s enhance the docs to make it clear how to use environment variables for local development and how to deploy them with nitric up
| 0
|
155,301
| 13,617,924,725
|
IssuesEvent
|
2020-09-23 17:44:57
|
adobe/spectrum-css
|
https://api.github.com/repos/adobe/spectrum-css
|
closed
|
Not all checkbox state variants are documented
|
Component: Checkbox documentation sync to jira
|
## Description
The basic states are documented, but not how they could be combined (e.g. quiet and invalid and indeterminate). This means it's hard to quickly visually check/verify that downstream libraries like spectrum-web-components are rendering those states consistently with spectrum-css.
## Link to documentation
https://opensource.adobe.com/spectrum-css/components/checkbox/
## Additional context
I haven't looked at the other doc pages, but I'd guess that this is a more general issue with many components.
synced to jira: https://jira.corp.adobe.com/browse/SDS-7328
|
1.0
|
Not all checkbox state variants are documented - ## Description
The basic states are documented, but not how they could be combined (e.g. quiet and invalid and indeterminate). This means it's hard to quickly visually check/verify that downstream libraries like spectrum-web-components are rendering those states consistently with spectrum-css.
## Link to documentation
https://opensource.adobe.com/spectrum-css/components/checkbox/
## Additional context
I haven't looked at the other doc pages, but I'd guess that this is a more general issue with many components.
synced to jira: https://jira.corp.adobe.com/browse/SDS-7328
|
non_process
|
not all checkbox state variants are documented description the basic states are documented but not how they could be combined e g quiet and invalid and indeterminate this means it s hard to quickly visually check verify that downstream libraries like spectrum web components are rendering those states consistently with spectrum css link to documentation additional context i haven t looked at the other doc pages but i d guess that this is a more general issue with many components synced to jira
| 0
|
3,238
| 6,299,892,743
|
IssuesEvent
|
2017-07-21 01:04:09
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
opened
|
Members of implicit default object property are not resolved
|
bug navigation parse-tree-preprocessing
|
The default member of Workbooks is `[_Default]`, which is roughly equivalent to `Item`. Both `[_Default]` and `Item` return a `Workbook` object, so RD should be able to resolve the workbook members.
But, when the default member call is implicit, RD fails to resolve the `Workbook` properties
```vb
Debug.Print Workbooks.[_Default](ThisWorkbook.Name).ReadOnly 'ReadOnly is recognized
Debug.Print Workbooks.Item(ThisWorkbook.Name).ReadOnly 'ReadOnly is recognized
Debug.Print Workbooks(ThisWorkbook.Name).ReadOnly 'ReadOnly NOT recognized
```
|
1.0
|
Members of implicit default object property are not resolved - The default member of Workbooks is `[_Default]`, which is roughly equivalent to `Item`. Both `[_Default]` and `Item` return a `Workbook` object, so RD should be able to resolve the workbook members.
But, when the default member call is implicit, RD fails to resolve the `Workbook` properties
```vb
Debug.Print Workbooks.[_Default](ThisWorkbook.Name).ReadOnly 'ReadOnly is recognized
Debug.Print Workbooks.Item(ThisWorkbook.Name).ReadOnly 'ReadOnly is recognized
Debug.Print Workbooks(ThisWorkbook.Name).ReadOnly 'ReadOnly NOT recognized
```
|
process
|
members of implicit default object property are not resolved the default member of workbooks is which is roughly equivalent to item both default and item both return a workbook object so rd should be able to resolve the workbook members but when the default member call is implicit rd fails to resolve the workbook properties vb debug print workbooks thisworkbook name readonly readonly is recognized debug print workbooks item thisworkbook name readonly readonly is recognized debug print workbooks thisworkbook name readonly readonly not recognized
| 1
|
981
| 3,438,018,470
|
IssuesEvent
|
2015-12-13 17:45:34
|
pwittchen/ReactiveNetwork
|
https://api.github.com/repos/pwittchen/ReactiveNetwork
|
closed
|
Release 0.1.4
|
release process
|
**Initial release notes**:
- bumped RxJava dependency to v. 1.1.0
- bumped RxAndroid dependency to v. 1.1.0
- bumped Google Truth test dependency to v. 0.27
**Things to do**:
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
1.0
|
Release 0.1.4 - **Initial release notes**:
- bumped RxJava dependency to v. 1.1.0
- bumped RxAndroid dependency to v. 1.1.0
- bumped Google Truth test dependency to v. 0.27
**Things to do**:
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
process
|
release initial release notes bumped rxjava dependency to v bumped rxandroid dependency to v bumped google truth test dependency to v things to do bump library version upload archives to maven central close and release artifact on maven central update changelog md after maven sync bump library version in readme md create new github release
| 1
|
15,059
| 8,750,411,042
|
IssuesEvent
|
2018-12-13 19:08:20
|
aws/aws-iot-device-sdk-embedded-C
|
https://api.github.com/repos/aws/aws-iot-device-sdk-embedded-C
|
closed
|
Feature change on how `Network::destroy` callback is called throughout the stack
|
area-mqtt improvement tenet-performance
|
In the stack the `Network::destroy` method is called after every `Network::disconnect`. This isn't very efficient and can lead to memory fragmentation. The destroy method only needs to be called if the intent is to tear down the stack.
I am using the mbedTLS library for all TLS. After the TLS contexts are set up they do not need to be destroyed if a reconnect strategy is required. All that needs to be done is to re-establish the socket connection and attempt a new handshake.
Every time a destroy is called memory is freed, then on reconnect that memory needs to be recreated. This is a major concern on small MCUs with little RAM which need to run 24/7 without failure.
I have gotten around this by never making `Network::destroy` clean up the TLS context if it's already initialised. I've also added some special code to `Network::connect` to do a reconnect and not reinitialise the TLS context, if already initialised.
On a side note, if a client loses connection, is a destroy called anywhere along the chain?
|
True
|
Feature change on how `Network::destroy` callback is called throughout the stack - In the stack the `Network::destroy` method is called after every `Network::disconnect`. This isn't very efficient and can lead to memory fragmentation. The destroy method only needs to be called if the intent is to tear down the stack.
I am using the mbedTLS library for all TLS. After the TLS contexts are set up they do not need to be destroyed if a reconnect strategy is required. All that needs to be done is to re-establish the socket connection and attempt a new handshake.
Every time a destroy is called memory is freed, then on reconnect that memory needs to be recreated. This is a major concern on small MCUs with little RAM which need to run 24/7 without failure.
I have gotten around this by never making `Network::destroy` clean up the TLS context if it's already initialised. I've also added some special code to `Network::connect` to do a reconnect and not reinitialise the TLS context, if already initialised.
On a side note, if a client loses connection, is a destroy called anywhere along the chain?
|
non_process
|
feature change on how network destroy callback is called throughout the stack in the stack the network destroy method is called after every network disconnect this isn t a very inefficient and can lead to memory fragmentation the destroy method only needs to be called if the intent is to tear down the stack i am using the mbedtls library for all tls after the tls context are setup they do not need to be destroyed if a reconnected strategy is required all that needs to be done is a socket connection reestablished and a new handshake attempt is done every time a destroy is called memory is freed then on reconnect that memory needs to be recreated this is a major concern on small mcu with little ram which need to run without failure i have gotten around this by never making network destroy cleanup the tls context is it s already initialised i ve also added some special code to network connect to do a reconnect and not reinitialise the tls context if already initialised on a side note if a client losses connection is a destroy called anywhere along the chain
| 0
|
124,391
| 17,772,544,814
|
IssuesEvent
|
2021-08-30 15:10:52
|
kapseliboi/evergreen
|
https://api.github.com/repos/kapseliboi/evergreen
|
opened
|
CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz
|
security vulnerability
|
## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: evergreen/distribution/client/package.json</p>
<p>Path to vulnerable library: evergreen/distribution/client/node_modules/xmlhttprequest-ssl/package.json,evergreen/services/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- socket.io-client-2.1.1.tgz (Root Library)
- engine.io-client-3.2.1.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/evergreen/commit/13675096220f0e986aa94cafc5f57de6b38e38cd">13675096220f0e986aa94cafc5f57de6b38e38cd</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-h4j5-c7cj-74xg">https://github.com/advisories/GHSA-h4j5-c7cj-74xg</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: xmlhttprequest - 1.7.0,xmlhttprequest-ssl - 1.6.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28502 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - ## CVE-2020-28502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: evergreen/distribution/client/package.json</p>
<p>Path to vulnerable library: evergreen/distribution/client/node_modules/xmlhttprequest-ssl/package.json,evergreen/services/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- socket.io-client-2.1.1.tgz (Root Library)
- engine.io-client-3.2.1.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/evergreen/commit/13675096220f0e986aa94cafc5f57de6b38e38cd">13675096220f0e986aa94cafc5f57de6b38e38cd</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package xmlhttprequest before 1.7.0; all versions of package xmlhttprequest-ssl. Provided requests are sent synchronously (async=False on xhr.open), malicious user input flowing into xhr.send could result in arbitrary code being injected and run.
<p>Publish Date: 2021-03-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28502>CVE-2020-28502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-h4j5-c7cj-74xg">https://github.com/advisories/GHSA-h4j5-c7cj-74xg</a></p>
<p>Release Date: 2021-03-05</p>
<p>Fix Resolution: xmlhttprequest - 1.7.0,xmlhttprequest-ssl - 1.6.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in xmlhttprequest ssl tgz cve high severity vulnerability vulnerable library xmlhttprequest ssl tgz xmlhttprequest for node library home page a href path to dependency file evergreen distribution client package json path to vulnerable library evergreen distribution client node modules xmlhttprequest ssl package json evergreen services node modules xmlhttprequest ssl package json dependency hierarchy socket io client tgz root library engine io client tgz x xmlhttprequest ssl tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package xmlhttprequest before all versions of package xmlhttprequest ssl provided requests are sent synchronously async false on xhr open malicious user input flowing into xhr send could result in arbitrary code being injected and run publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xmlhttprequest xmlhttprequest ssl step up your open source security game with whitesource
| 0
|
1,099
| 3,574,326,287
|
IssuesEvent
|
2016-01-27 11:16:01
|
refugeetech/platform
|
https://api.github.com/repos/refugeetech/platform
|
closed
|
Create issues based on the mockup
|
in progress Open Process
|
We need to create project issues based on this mockup, in order to prioritize and track our efforts.
# Mockup

https://projects.invisionapp.com/share/J353G8DCW#/screens/118855239
# Definition of Done
* [ ] All aspects of the projecthub have been issued from these designs
## Child issues
Specifically, the following issues have been opened related to completing this task:
### Navigation menu
* [x] #71 - Add navigation menu to default layout
* [x] #67 - Navbar shrink state
### Translation/internationalization
* [ ] #73 - Choose and add internationalization (i18n) framework
* [ ] #72 - Add language select to navigation menu
* [ ] #109 - Language select, drop-down on hover
### Share content
* [ ] #75 - Add "Share overlay" feature and link to front page
* [ ] #76 - Add social media links to "share overlay"
### Search projects
* [x] #77 - Add search feature for Projects collection
* [x] #78 - Add search feature to front page
### Filter bar
* [ ] #68 - Create Projects collection filtering functions
* [ ] #106 - Filter bar enhancements
### Projects profile page
* [x] #80 - Create issues based on the Project profile page
## Reviewing
@brylie @mark-mcevoy reviews this description/issue increasing list of requirements and created issues that have been associated.
|
1.0
|
Create issues based on the mockup - We need to create project issues based on this mockup, in order to prioritize and track our efforts.
# Mockup

https://projects.invisionapp.com/share/J353G8DCW#/screens/118855239
# Definition of Done
* [ ] All aspects of the projecthub have been issued from these designs
## Child issues
Specifically, the following issues have been opened related to completing this task:
### Navigation menu
* [x] #71 - Add navigation menu to default layout
* [x] #67 - Navbar shrink state
### Translation/internationalization
* [ ] #73 - Choose and add internationalization (i18n) framework
* [ ] #72 - Add language select to navigation menu
* [ ] #109 - Language select, drop-down on hover
### Share content
* [ ] #75 - Add "Share overlay" feature and link to front page
* [ ] #76 - Add social media links to "share overlay"
### Search projects
* [x] #77 - Add search feature for Projects collection
* [x] #78 - Add search feature to front page
### Filter bar
* [ ] #68 - Create Projects collection filtering functions
* [ ] #106 - Filter bar enhancements
### Projects profile page
* [x] #80 - Create issues based on the Project profile page
## Reviewing
@brylie @mark-mcevoy reviews this description/issue increasing list of requirements and created issues that have been associated.
|
process
|
create issues based on the mockup we need to create project issues based on this mockup in order to prioritize and track our efforts mockup definition of done all aspects of the projecthub have been issued from these designs child issues specifically the following issues have been opened related to completing this task navigation menu add navigation menu to default layout navbar shrink state translation internationalization choose and add internationalization framework add language select to navigation menu language select drop down on hover share content add share overlay feature and link to front page add social media links to share overlay search projects add search feature for projects collection add search feature to front page filter bar create projects collection filtering functions filter bar enhancements projects profile page create issues based on the project profile page reviewing brylie mark mcevoy reviews this description issue increasing list of requirements and created issues that have been associated
| 1
|
222,627
| 7,434,609,047
|
IssuesEvent
|
2018-03-26 11:40:18
|
mozilla/addons-linter
|
https://api.github.com/repos/mozilla/addons-linter
|
reopened
|
Update the API schema with "downloads" and "downloads.open" as optional permissions
|
priority: p3 triaged
|
### Describe the problem and steps to reproduce it:
Submit an add-on that has "downloads" and "downloads.open" as optional permissions.
### What happened?
The add-on validation has failed and some errors are displayed.
### What did you expect to happen?
The "downloads" and "downloads.open" to be recognized as optional permissions at submit.
### Anything else we should know?
Reproduced on AMO -dev with FF60 on Win 7 64-bit.

|
1.0
|
Update the API schema with "downloads" and "downloads.open" as optional permissions - ### Describe the problem and steps to reproduce it:
Submit an add-on that has "downloads" and "downloads.open" as optional permissions.
### What happened?
The add-on validation has failed and some errors are displayed.
### What did you expect to happen?
The "downloads" and "downloads.open" to be recognized as optional permissions at submit.
### Anything else we should know?
Reproduced on AMO -dev with FF60 on Win 7 64-bit.

|
non_process
|
update the api schema with downloads and downloads open as optional permissions describe the problem and steps to reproduce it submit an add on that has “downloads and downloads open” as optional permissions what happened the add on validation has failed and some errors are displayed what did you expect to happen the downloads and downloads open to be recognized as optional permissions at submit anything else we should know reproduced on amo dev with on win bit
| 0
|
44,519
| 23,663,104,475
|
IssuesEvent
|
2022-08-26 17:39:15
|
modin-project/modin
|
https://api.github.com/repos/modin-project/modin
|
closed
|
PERF: `dataframe.filter` can use lazy index and columns evaluation
|
Performance 🚀
|
Place: https://github.com/modin-project/modin/blob/fbd1e2a1b91170a3c45ce9565b1051bb2d55e4eb/modin/core/dataframe/pandas/dataframe/dataframe.py#L1799
We can avoid following call: https://github.com/modin-project/modin/blob/fbd1e2a1b91170a3c45ce9565b1051bb2d55e4eb/modin/core/dataframe/pandas/dataframe/dataframe.py#L1833
|
True
|
PERF: `dataframe.filter` can use lazy index and columns evaluation - Place: https://github.com/modin-project/modin/blob/fbd1e2a1b91170a3c45ce9565b1051bb2d55e4eb/modin/core/dataframe/pandas/dataframe/dataframe.py#L1799
We can avoid following call: https://github.com/modin-project/modin/blob/fbd1e2a1b91170a3c45ce9565b1051bb2d55e4eb/modin/core/dataframe/pandas/dataframe/dataframe.py#L1833
|
non_process
|
perf dataframe filter can use lazy index and columns evaluation place we can avoid following call
| 0
|
276,569
| 30,485,999,791
|
IssuesEvent
|
2023-07-18 02:17:00
|
goharbor/harbor
|
https://api.github.com/repos/goharbor/harbor
|
closed
|
Update scan_report and vulnerability_record table
|
target/2.9.0 area/security-hub
|
For performance consideration:
Add summary information to scan_report
Extract cve_score from vendor attribute in vulnerability_record
|
True
|
Update scan_report and vulnerability_record table - For performance consideration:
Add summary information to scan_report
Extract cve_score from vendor attribute in vulnerability_record
|
non_process
|
update scan report and vulnerability record table for performance consideration add summary information to scan report extract cve score from vendor attribute in vulnerability record
| 0
|
20,846
| 27,620,283,423
|
IssuesEvent
|
2023-03-09 23:14:30
|
googleapis/common-protos-php
|
https://api.github.com/repos/googleapis/common-protos-php
|
closed
|
Dependency Dashboard
|
type: process
|
This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/phpunit-phpunit-10.x -->[chore(deps): update dependency phpunit/phpunit to v10](../pull/52)
- [ ] <!-- recreate-branch=renovate/php-8.x -->[chore(deps): update php docker tag to v8](../pull/43)
## Detected dependencies
<details><summary>composer</summary>
<blockquote>
<details><summary>composer.json</summary>
- `google/protobuf ^3.6.1`
- `phpunit/phpunit ^4.8.36||^8.5`
- `sami/sami *`
</details>
</blockquote>
</details>
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/docs.yml</summary>
- `actions/checkout v3`
- `nick-invision/retry v2`
- `php 7.4-cli`
</details>
<details><summary>.github/workflows/tests.yml</summary>
- `actions/checkout v3`
- `codecov/codecov-action v3`
- `shivammathur/setup-php v2`
- `nick-invision/retry v2`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue lists Renovate updates and detected dependencies. Read the [Dependency Dashboard](https://docs.renovatebot.com/key-concepts/dashboard/) docs to learn more.
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/phpunit-phpunit-10.x -->[chore(deps): update dependency phpunit/phpunit to v10](../pull/52)
- [ ] <!-- recreate-branch=renovate/php-8.x -->[chore(deps): update php docker tag to v8](../pull/43)
## Detected dependencies
<details><summary>composer</summary>
<blockquote>
<details><summary>composer.json</summary>
- `google/protobuf ^3.6.1`
- `phpunit/phpunit ^4.8.36||^8.5`
- `sami/sami *`
</details>
</blockquote>
</details>
<details><summary>github-actions</summary>
<blockquote>
<details><summary>.github/workflows/docs.yml</summary>
- `actions/checkout v3`
- `nick-invision/retry v2`
- `php 7.4-cli`
</details>
<details><summary>.github/workflows/tests.yml</summary>
- `actions/checkout v3`
- `codecov/codecov-action v3`
- `shivammathur/setup-php v2`
- `nick-invision/retry v2`
</details>
</blockquote>
</details>
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue lists renovate updates and detected dependencies read the docs to learn more ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull detected dependencies composer composer json google protobuf phpunit phpunit sami sami github actions github workflows docs yml actions checkout nick invision retry php cli github workflows tests yml actions checkout codecov codecov action shivammathur setup php nick invision retry check this box to trigger a request for renovate to run again on this repository
| 1
|
408,193
| 27,657,206,628
|
IssuesEvent
|
2023-03-12 04:25:50
|
uptrain-ai/uptrain
|
https://api.github.com/repos/uptrain-ai/uptrain
|
closed
|
🌎 Translate the UpTrain Readme to Russian
|
documentation good first issue help wanted
|
A translation for README.md in Russian would allow developers who prefer reading in Russian to understand what UpTrain is about quickly.
|
1.0
|
🌎 Translate the UpTrain Readme to Russian - A translation for README.md in Russian would allow developers who prefer reading in Russian to understand what UpTrain is about quickly.
|
non_process
|
🌎 translate the uptrain readme to russian a translation for readme md in russian would allow developers who prefer reading in russian to understand what uptrain is about quickly
| 0
|
3,133
| 6,189,209,593
|
IssuesEvent
|
2017-07-04 12:22:06
|
EBrown8534/StackExchangeStatisticsExplorer
|
https://api.github.com/repos/EBrown8534/StackExchangeStatisticsExplorer
|
closed
|
Almost all meta sites appear twice
|
bug data issue in process
|
*sigh*
Stack Exchange upgraded to HTTPS recently, and as such changed all meta sites API parameters from `meta.sitename` to `sitename.meta`, which means that I now have two copies of each site in the database.
Will update eventually.
|
1.0
|
Almost all meta sites appear twice - *sigh*
Stack Exchange upgraded to HTTPS recently, and as such changed all meta sites API parameters from `meta.sitename` to `sitename.meta`, which means that I now have two copies of each site in the database.
Will update eventually.
|
process
|
almost all meta sites appear twice sigh stack exchange upgraded to https recently and as such changed all meta sites api parameters from meta sitename to sitename meta which means that i now have two copies of each site in the database will update eventually
| 1
|
501,293
| 14,525,175,827
|
IssuesEvent
|
2020-12-14 12:33:39
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.youtube.com - site is not usable
|
browser-fixme ml-needsdiagnosis-false ml-probability-high priority-critical
|
<!-- @browser: google chrome -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63642 -->
**URL**: https://www.youtube.com/
**Browser / Version**: google chrome
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Server problem is showing and it is displaying on its page that something went wrong.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/847e7c6c-5f72-4469-9a59-6ca4729f8c02.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201122152513</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/e14f60d5-f55f-4e56-9976-5a58819a982e)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.youtube.com - site is not usable - <!-- @browser: google chrome -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/63642 -->
**URL**: https://www.youtube.com/
**Browser / Version**: google chrome
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
Server problem is showing and it is displaying on its page that something went wrong.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2020/12/847e7c6c-5f72-4469-9a59-6ca4729f8c02.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201122152513</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/12/e14f60d5-f55f-4e56-9976-5a58819a982e)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
site is not usable url browser version google chrome operating system windows tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce sever problem is showing and it is displaying on its page that something went wrong view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
133,212
| 5,199,004,135
|
IssuesEvent
|
2017-01-23 19:41:21
|
concrete5/concrete5
|
https://api.github.com/repos/concrete5/concrete5
|
closed
|
Proposal: "Command bus" for concrete5
|
priority:like to have type:rfc
|
This text is copied from a comment in a different github issue:
-----
I suggest we add a command bus to concrete5 and use commands to solve this issue
In simple terms, a command bus takes a command and sends it to a single matching handler object. It allows us to easily create tasks separate from the code that needs to be handling it. This also gives end users the ability to override common task functionality. Here's a few ideas for commands we might want in the core
- `RegisterCommand`
- `LoginCommand`
- `LogoutCommand`
- `SendEmailCommand`
- `QueueEmailCommand`
- `HandleFormCommand`
- `ClearCacheCommand`
- `RunJobCommand`
- `QueueJobCommand`
So what does a command look like?
### Package Install command in the core:
We define our install command like this:
(We can also break this interface up into individual interfaces like `SinglePageInstallCommandInterface` and `ThemeInstallCommandInterface` to make this as reusable as possible.)
``` php
<?php
interface PackageInstallCommandInterface extends CommandInterface
{
public function addTheme($handle, $name, $whatever_else);
public function addSinglePage($handle, $path);
public function addEtcetera($whatever);
public function getThemes();
public function getSinglePages();
public function getEtcetera();
}
```
Now to do the installing, we register a handler:
``` php
<?php
class PackageInstallCommandServiceProvider extends Provider
{
public function register()
{
$this->app['bus']->addHandler(PackageInstallCommandInterface::class, function($command) {
foreach ($command->getSinglePages() as $single_page) {
// Install single pages
}
// install more stuff
});
}
}
```
#### So what does this look like in a package?
``` php
<?php
class SomeThing extends \Concrete\Core\Package\Package
{
public function install()
{
// Install
$package = parent::install();
$command = $this->getInstallCommand();
$this->app['bus']->handle($command);
// Get installed data out of the command...
$single_pages = $command->getInstalledSinglePages();
}
public function update()
{
parent::update();
$command = $this->getInstallCommand();
$this->app['bus']->handle($command);
// Get installed data out of the command...
$single_pages = $command->getInstalledSinglePages();
}
private function getInstallCommand()
{
$command = new PackageInstallCommand($this);
$command->addSinglePage('/dashboard/whatever', 'Some SinglePage');
return $command;
}
}
```
Thoughts?
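For readers outside PHP, the dispatch described above — each command routed to the single handler registered for its type — can be sketched in Python. All names here (`CommandBus`, `PackageInstallCommand`) are illustrative, not concrete5 APIs:

```python
# Minimal command-bus sketch: one handler is registered per command
# type, and the bus routes each command to that single handler.
class CommandBus:
    def __init__(self):
        self._handlers = {}  # command type -> handler callable

    def add_handler(self, command_type, handler):
        self._handlers[command_type] = handler

    def handle(self, command):
        # Walk the MRO so a handler registered for a base class
        # (the "interface") also matches subclasses of it.
        for cls in type(command).__mro__:
            if cls in self._handlers:
                return self._handlers[cls](command)
        raise LookupError(f"no handler for {type(command).__name__}")


class PackageInstallCommand:
    def __init__(self):
        self.single_pages = []  # (path, name) pairs to install

    def add_single_page(self, path, name):
        self.single_pages.append((path, name))


bus = CommandBus()
bus.add_handler(PackageInstallCommand,
                lambda cmd: [f"installed {p}" for p, _ in cmd.single_pages])

command = PackageInstallCommand()
command.add_single_page("/dashboard/whatever", "Some SinglePage")
result = bus.handle(command)
```

Because lookup walks the command's class hierarchy, a package-specific subclass of the command still reaches the core handler — the override point the proposal is after.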
|
1.0
|
Proposal: "Command bus" for concrete5 - This text is copied from a comment in a different github issue:
-----
I suggest we add a command bus to concrete5 and use commands to solve this issue
In simple terms, a command bus takes a command and sends it to a single matching handler object. It allows us to easily create tasks separate from the code that needs to be handling it. This also gives end users the ability to override common task functionality. Here's a few ideas for commands we might want in the core
- `RegisterCommand`
- `LoginCommand`
- `LogoutCommand`
- `SendEmailCommand`
- `QueueEmailCommand`
- `HandleFormCommand`
- `ClearCacheCommand`
- `RunJobCommand`
- `QueueJobCommand`
So what does a command look like?
### Package Install command in the core:
We define our install command like this:
(We can also break this interface up into individual interfaces like `SinglePageInstallCommandInterface` and `ThemeInstallCommandInterface` to make this as reusable as possible.)
``` php
<?php
interface PackageInstallCommandInterface extends CommandInterface
{
public function addTheme($handle, $name, $whatever_else);
public function addSinglePage($handle, $path);
public function addEtcetera($whatever);
public function getThemes();
public function getSinglePages();
public function getEtcetera();
}
```
Now to do the installing, we register a handler:
``` php
<?php
class PackageInstallCommandServiceProvider extends Provider
{
public function register()
{
$this->app['bus']->addHandler(PackageInstallCommandInterface::class, function($command) {
foreach ($command->getSinglePages() as $single_page) {
// Install single pages
}
// install more stuff
});
}
}
```
#### So what does this look like in a package?
``` php
<?php
class SomeThing extends \Concrete\Core\Package\Package
{
public function install()
{
// Install
$package = parent::install();
$command = $this->getInstallCommand();
$this->app['bus']->handle($command);
// Get installed data out of the command...
$single_pages = $command->getInstalledSinglePages();
}
public function update()
{
parent::update();
$command = $this->getInstallCommand();
$this->app['bus']->handle($command);
// Get installed data out of the command...
$single_pages = $command->getInstalledSinglePages();
}
private function getInstallCommand()
{
$command = new PackageInstallCommand($this);
$command->addSinglePage('/dashboard/whatever', 'Some SinglePage');
return $command;
}
}
```
Thoughts?
|
non_process
|
proposal command bus for this text is copied from a comment in a different github issue i suggest we add a command bus to and use commands to solve this issue in simple terms a command bus takes a command and sends it to a single matching handler object it allows us to easily create tasks separate from the code that needs to be handling it this also gives end users the ability to override common task functionality here s a few ideas for commands we might want in the core registercommand logincommand logoutcommand sendemailcommand queueemailcommand handleformcommand clearcachecommand runjobcommand queuejobcommand so what does a command look like package install command in the core we define our install command like this we can also break this interface up into individual interfaces like singlepageinstallcommandinterface and themeinstallcommandinterface to make this as reusable as possible php php interface packageinstallcommandinterface extends commandinterface public function addtheme handle name whatever else public function addsinglepage handle path public function addetcetera whatever public function getthemes public function getsinglepages public function getetcetera now to do the installing we register a handler php php class packageinstallcommandserviceprovider extends provider public function register this app addhandler packageinstallcommandinterface class function command foreach command getsinglepages as single page install single pages install more stuff so what does this look like in a package php php class something extends concrete core package package public function install install package parent install command package getinstallcommand this app handle command get installed data out of the command single pages command getinstalledsinglepages public function update parent update command package getinstallcommand this app handle command get installed data out of the command single pages command getinstalledsinglepages private function 
getinstallcommand command new packageinstallcommand this command addsinglepage dashboard whatever some singlepage return command thoughts
| 0
|
45,419
| 11,649,257,557
|
IssuesEvent
|
2020-03-02 01:14:56
|
DynamoRIO/dynamorio
|
https://api.github.com/repos/DynamoRIO/dynamorio
|
closed
|
build failures with gcc 9.2.1: sigcontext define conflict; drgui override warning
|
Component-Build OpSys-Linux help wanted
|
dynamorio does not compile on ubuntu 19.10. There are mutually incompatible versions of sigcontext.h that are co-included. It compiles fine for me on ubuntu 18.04 and ubuntu 16.10
```
/mnt/robhenry/dynamorio/dynamorio/core/unix/include/sigcontext.h:20: error: "FP_XSTATE_MAGIC2_SIZE" redefined [-Werror]
20 | #define FP_XSTATE_MAGIC2_SIZE sizeof(FP_XSTATE_MAGIC2)
|
from /mnt/robhenry/dynamorio/dynamorio/core/drlibc/drlibc.c:43:
/usr/include/x86_64-linux-gnu/bits/sigcontext.h:29: note: this is the location of the previous definition
29 | #define FP_XSTATE_MAGIC2_SIZE sizeof (FP_XSTATE_MAGIC2)
|
```
gcc is gcc 9.2.1-9ubuntu2
|
1.0
|
build failures with gcc 9.2.1: sigcontext define conflict; drgui override warning - dynamorio does not compile on ubuntu 19.10. There are mutually incompatible versions of sigcontext.h that are co-included. It compiles fine for me on ubuntu 18.04 and ubuntu 16.10
```
/mnt/robhenry/dynamorio/dynamorio/core/unix/include/sigcontext.h:20: error: "FP_XSTATE_MAGIC2_SIZE" redefined [-Werror]
20 | #define FP_XSTATE_MAGIC2_SIZE sizeof(FP_XSTATE_MAGIC2)
|
from /mnt/robhenry/dynamorio/dynamorio/core/drlibc/drlibc.c:43:
/usr/include/x86_64-linux-gnu/bits/sigcontext.h:29: note: this is the location of the previous definition
29 | #define FP_XSTATE_MAGIC2_SIZE sizeof (FP_XSTATE_MAGIC2)
|
```
gcc is gcc 9.2.1-9ubuntu2
|
non_process
|
build failures with gcc sigcontext define conflict drgui override warning dynamorio does not compile on ubuntu there are mutually incompatible versions of sigcontext h that are co included it compiles fine for me on ubuntu and ubuntu mnt robhenry dynamorio dynamorio core unix include sigcontext h error fp xstate size redefined define fp xstate size sizeof fp xstate from mnt robhenry dynamorio dynamorio core drlibc drlibc c usr include linux gnu bits sigcontext h note this is the location of the previous definition define fp xstate size sizeof fp xstate gcc is gcc
| 0
|
231,970
| 25,556,887,474
|
IssuesEvent
|
2022-11-30 07:37:26
|
elikkatzgit/ori-tabac-dotnet-eshop2
|
https://api.github.com/repos/elikkatzgit/ori-tabac-dotnet-eshop2
|
opened
|
microsoft.entityframeworkcore.sqlserver.6.0.7.nupkg: 1 vulnerabilities (highest severity is: 9.8)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>microsoft.entityframeworkcore.sqlserver.6.0.7.nupkg</b></p></summary>
<p></p>
<p>Path to dependency file: /src/Web/Web.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.drawing.common/4.7.0/system.drawing.common.4.7.0.nupkg</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/elikkatzgit/ori-tabac-dotnet-eshop2/commit/e6db379fee9547f6b848c2f3dc15c01419e715ce">e6db379fee9547f6b848c2f3dc15c01419e715ce</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (microsoft.entityframeworkcore.sqlserver.6.0.7.nupkg version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-24112](https://www.mend.io/vulnerability-database/CVE-2021-24112) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | system.drawing.common.4.7.0.nupkg | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-24112</summary>
### Vulnerable Library - <b>system.drawing.common.4.7.0.nupkg</b></p>
<p>Provides access to GDI+ graphics functionality.
Commonly Used Types:
System.Drawing.Bitmap
System.D...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.drawing.common.4.7.0.nupkg">https://api.nuget.org/packages/system.drawing.common.4.7.0.nupkg</a></p>
<p>Path to dependency file: /src/Infrastructure/Infrastructure.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.drawing.common/4.7.0/system.drawing.common.4.7.0.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.entityframeworkcore.sqlserver.6.0.7.nupkg (Root Library)
- microsoft.data.sqlclient.2.1.4.nupkg
- system.runtime.caching.4.7.0.nupkg
- system.configuration.configurationmanager.4.7.0.nupkg
- system.security.permissions.4.7.0.nupkg
- system.windows.extensions.4.7.0.nupkg
- :x: **system.drawing.common.4.7.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/elikkatzgit/ori-tabac-dotnet-eshop2/commit/e6db379fee9547f6b848c2f3dc15c01419e715ce">e6db379fee9547f6b848c2f3dc15c01419e715ce</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
.NET Core Remote Code Execution Vulnerability This CVE ID is unique from CVE-2021-26701.
<p>Publish Date: 2021-02-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-24112>CVE-2021-24112</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-rxg9-xrhp-64gj">https://github.com/advisories/GHSA-rxg9-xrhp-64gj</a></p>
<p>Release Date: 2021-02-25</p>
<p>Fix Resolution: System.Drawing.Common - 4.7.2,5.0.3</p>
</p>
<p></p>
</details>
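The "Dependency Hierarchy" listed above is how a scanner explains a transitive vulnerability: a path exists from the root library down to the vulnerable leaf. Finding such a path is a plain graph search — a sketch with abbreviated package names, not Mend's actual data model:

```python
# Depth-first search for the chain from a root package to a
# vulnerable transitive dependency in a dependency graph.
def find_chain(deps, root, target, path=None):
    path = (path or []) + [root]
    if root == target:
        return path
    for child in deps.get(root, []):
        chain = find_chain(deps, child, target, path)
        if chain:
            return chain
    return None

# Abbreviated graph mirroring the hierarchy reported above.
deps = {
    "efcore.sqlserver": ["data.sqlclient"],
    "data.sqlclient": ["runtime.caching"],
    "runtime.caching": ["configurationmanager"],
    "configurationmanager": ["security.permissions"],
    "security.permissions": ["windows.extensions"],
    "windows.extensions": ["drawing.common"],
}
chain = find_chain(deps, "efcore.sqlserver", "drawing.common")
```

The seven-link chain is why no upgrade of the direct dependency fixes the issue here: the fix must land somewhere along the path, which is what the "Fix Resolution" on the leaf package addresses.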
|
True
|
microsoft.entityframeworkcore.sqlserver.6.0.7.nupkg: 1 vulnerabilities (highest severity is: 9.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>microsoft.entityframeworkcore.sqlserver.6.0.7.nupkg</b></p></summary>
<p></p>
<p>Path to dependency file: /src/Web/Web.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.drawing.common/4.7.0/system.drawing.common.4.7.0.nupkg</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/elikkatzgit/ori-tabac-dotnet-eshop2/commit/e6db379fee9547f6b848c2f3dc15c01419e715ce">e6db379fee9547f6b848c2f3dc15c01419e715ce</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (microsoft.entityframeworkcore.sqlserver.6.0.7.nupkg version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-24112](https://www.mend.io/vulnerability-database/CVE-2021-24112) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | system.drawing.common.4.7.0.nupkg | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-24112</summary>
### Vulnerable Library - <b>system.drawing.common.4.7.0.nupkg</b></p>
<p>Provides access to GDI+ graphics functionality.
Commonly Used Types:
System.Drawing.Bitmap
System.D...</p>
<p>Library home page: <a href="https://api.nuget.org/packages/system.drawing.common.4.7.0.nupkg">https://api.nuget.org/packages/system.drawing.common.4.7.0.nupkg</a></p>
<p>Path to dependency file: /src/Infrastructure/Infrastructure.csproj</p>
<p>Path to vulnerable library: /home/wss-scanner/.nuget/packages/system.drawing.common/4.7.0/system.drawing.common.4.7.0.nupkg</p>
<p>
Dependency Hierarchy:
- microsoft.entityframeworkcore.sqlserver.6.0.7.nupkg (Root Library)
- microsoft.data.sqlclient.2.1.4.nupkg
- system.runtime.caching.4.7.0.nupkg
- system.configuration.configurationmanager.4.7.0.nupkg
- system.security.permissions.4.7.0.nupkg
- system.windows.extensions.4.7.0.nupkg
- :x: **system.drawing.common.4.7.0.nupkg** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/elikkatzgit/ori-tabac-dotnet-eshop2/commit/e6db379fee9547f6b848c2f3dc15c01419e715ce">e6db379fee9547f6b848c2f3dc15c01419e715ce</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
.NET Core Remote Code Execution Vulnerability This CVE ID is unique from CVE-2021-26701.
<p>Publish Date: 2021-02-25
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-24112>CVE-2021-24112</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-rxg9-xrhp-64gj">https://github.com/advisories/GHSA-rxg9-xrhp-64gj</a></p>
<p>Release Date: 2021-02-25</p>
<p>Fix Resolution: System.Drawing.Common - 4.7.2,5.0.3</p>
</p>
<p></p>
</details>
|
non_process
|
microsoft entityframeworkcore sqlserver nupkg vulnerabilities highest severity is vulnerable library microsoft entityframeworkcore sqlserver nupkg path to dependency file src web web csproj path to vulnerable library home wss scanner nuget packages system drawing common system drawing common nupkg found in head commit a href vulnerabilities cve severity cvss dependency type fixed in microsoft entityframeworkcore sqlserver nupkg version remediation available high system drawing common nupkg transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library system drawing common nupkg provides access to gdi graphics functionality commonly used types system drawing bitmap system d library home page a href path to dependency file src infrastructure infrastructure csproj path to vulnerable library home wss scanner nuget packages system drawing common system drawing common nupkg dependency hierarchy microsoft entityframeworkcore sqlserver nupkg root library microsoft data sqlclient nupkg system runtime caching nupkg system configuration configurationmanager nupkg system security permissions nupkg system windows extensions nupkg x system drawing common nupkg vulnerable library found in head commit a href found in base branch main vulnerability details net core remote code execution vulnerability this cve id is unique from cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution system drawing common
| 0
|
20,991
| 27,854,183,412
|
IssuesEvent
|
2023-03-20 21:14:16
|
saibrotech/mentoria
|
https://api.github.com/repos/saibrotech/mentoria
|
closed
|
Processo seletivo CosmoBots
|
processo seletivo
|
Processo Seletivo para vaga de Analista de Sistemas - Junior
- [x] Conversar com a Nathalia sobre a vaga.
- [x] Segunda etapa: Construir um Chatbot através da plataforma da CosmoBots.
- [x] Terceira etapa: Conversa com João e Mauro (equipe técnica).
- [x] Aguardar resultado do processo seletivo.
https://cosmobots.io



|
1.0
|
Processo seletivo CosmoBots - Processo Seletivo para vaga de Analista de Sistemas - Junior
- [x] Conversar com a Nathalia sobre a vaga.
- [x] Segunda etapa: Construir um Chatbot através da plataforma da CosmoBots.
- [x] Terceira etapa: Conversa com João e Mauro (equipe técnica).
- [x] Aguardar resultado do processo seletivo.
https://cosmobots.io



|
process
|
processo seletivo cosmobots processo seletivo para vaga de analista de sistemas junior conversar com a nathalia sobre a vaga segunda etapa construir um chatbot através da plataforma da cosmobots terceira etapa conversa com joão e mauro equipe técnica aguardar resultado do processo seletivo
| 1
|
226,012
| 7,497,146,108
|
IssuesEvent
|
2018-04-08 16:48:12
|
fedora-infra/bodhi
|
https://api.github.com/repos/fedora-infra/bodhi
|
opened
|
The CLI should be able to waive test failures
|
Client High priority RFE
|
The web UI has the ability to waive failed tests - we should add this ability to the CLI as well.
|
1.0
|
The CLI should be able to waive test failures - The web UI has the ability to waive failed tests - we should add this ability to the CLI as well.
|
non_process
|
the cli should be able to waive test failures the web ui has the ability to waive failed tests we should add this ability to the cli as well
| 0
|
21,726
| 30,233,661,608
|
IssuesEvent
|
2023-07-06 08:45:29
|
ukri-excalibur/excalibur-tests
|
https://api.github.com/repos/ukri-excalibur/excalibur-tests
|
closed
|
Create high-level script to run postprocessing
|
UCL postprocessing
|
This should accept some input either in command line or a config yaml/json file, and based on that it should perform the required type of analysis (see use cases in https://github.com/ukri-excalibur/excalibur-tests/issues/70#issue-1522882139).
The steps of this analysis should be something like
- read desired data from some file(s) in a path and store in a pandas dataframe (possibly collate results of various files)
- call the scripts to produce required output (plots and/or tables). The input to those scripts will be a subset of the dataframe, but its exact form cannot be decided yet, until we have some plotting scripts.
- produce info output with the dataframe column names, or a debug output of a dump of the whole dataframe
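The steps above can be sketched with the standard library alone (the real script would likely build a pandas DataFrame as described; the column names and file contents here are hypothetical):

```python
# Stdlib-only sketch of the collate-and-summarise flow: read rows
# from several CSV sources, collate them, and emit a small table
# plus the info output of available column names.
import csv
import io

def collate(files):
    """Read rows from several CSV file objects into one list of dicts."""
    rows = []
    for f in files:
        rows.extend(csv.DictReader(f))
    return rows

def summarise(rows, column):
    """Produce a simple table: value of `column` -> row count."""
    table = {}
    for row in rows:
        table[row[column]] = table.get(row[column], 0) + 1
    return table

run_a = io.StringIO("bench,time\nstream,1.2\nstream,1.3\n")
run_b = io.StringIO("bench,time\nhpgmg,9.8\n")
rows = collate([run_a, run_b])
print(sorted(rows[0].keys()))    # info output: the column names
print(summarise(rows, "bench"))  # {'stream': 2, 'hpgmg': 1}
```

The same shape carries over to pandas: `collate` becomes `pd.concat` over per-file frames, and `summarise` a `groupby` feeding the plotting scripts.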
|
1.0
|
Create high-level script to run postprocessing - This should accept some input either in command line or a config yaml/json file, and based on that it should perform the required type of analysis (see use cases in https://github.com/ukri-excalibur/excalibur-tests/issues/70#issue-1522882139).
The steps of this analysis should be something like
- read desired data from some file(s) in a path and store in a pandas dataframe (possibly collate results of various files)
- call the scripts to produce required output (plots and/or tables). The input to those scripts will be a subset of the dataframe, but its exact form cannot be decided yet, until we have some plotting scripts.
- produce info output with the dataframe column names, or a debug output of a dump of the whole dataframe
|
process
|
create high level script to run postprocessing this should accept some input either in command line or a config yaml json file and based on that it should perform the required type of analysis see use cases in the steps of this analysis should be something like read desired data from some file s in a path and store in a pandas dataframe possibly collate results of various files call the scripts to produce required output plots and or tables the input to those scripts will be a subset of the dataframe but its exact form cannot be decided yet until we have some plotting scripts produce info output with the dataframe column names or a debug output of a dump of the whole dataframe
| 1
|
5,678
| 8,558,415,822
|
IssuesEvent
|
2018-11-08 18:11:32
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[FEATURE][processing] New input type for expressions
|
Automatic new feature Processing
|
Original commit: https://github.com/qgis/QGIS/commit/132e76a596c71e3d559deef19c36b414c01630ec by nyalldawson
This adds a new input type for expression inputs. Expression
inputs can be linked to a parent layer so that the builder
shows the correct fields and layer variables.
It's designed for two use cases:
1. to be used when an algorithm specifically requires an expression,
eg Select by Expression and Extract by Expression.
2. to be potentially used as a replacement input instead of string
or number literals in algorithms. Eg - if the simplify algorithm
tolerance parameter was replaced with an expression paremeter, then
this expression would be evaluated for every feature before
simplifying that feature. It would allow parameters to be calculated
per feature, as opposed to the current approach of calculating
a parameter once before running the algorithm. It would also
mean algorithms like "variable distance buffer" would no longer
be needed, as a single "buffer" algorithm could then be used
for either a fixed distance, field based, or expression based
distance.
|
1.0
|
[FEATURE][processing] New input type for expressions - Original commit: https://github.com/qgis/QGIS/commit/132e76a596c71e3d559deef19c36b414c01630ec by nyalldawson
This adds a new input type for expression inputs. Expression
inputs can be linked to a parent layer so that the builder
shows the correct fields and layer variables.
It's designed for two use cases:
1. to be used when an algorithm specifically requires an expression,
eg Select by Expression and Extract by Expression.
2. to be potentially used as a replacement input instead of string
or number literals in algorithms. Eg - if the simplify algorithm
tolerance parameter was replaced with an expression paremeter, then
this expression would be evaluated for every feature before
simplifying that feature. It would allow parameters to be calculated
per feature, as opposed to the current approach of calculating
a parameter once before running the algorithm. It would also
mean algorithms like "variable distance buffer" would no longer
be needed, as a single "buffer" algorithm could then be used
for either a fixed distance, field based, or expression based
distance.
|
process
|
new input type for expressions original commit by nyalldawson this adds a new input type for expression inputs expression inputs can be linked to a parent layer so that the builder shows the correct fields and layer variables it s designed for two use cases to be used when an algorithm specifically requires an expression eg select by expression and extract by expression to be potentially used as a replacement input instead of string or number literals in algorithms eg if the simplify algorithm tolerance parameter was replaced with an expression paremeter then this expression would be evaluated for every feature before simplifying that feature it would allow parameters to be calculated per feature as opposed to the current approach of calculating a parameter once before running the algorithm it would also mean algorithms like variable distance buffer would no longer be needed as a single buffer algorithm could then be used for either a fixed distance field based or expression based distance
| 1
|
12,109
| 14,740,465,824
|
IssuesEvent
|
2021-01-07 09:08:01
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
044 - Account Payment not showing
|
anc-process anp-0.5 ant-bug
|
In GitLab by @kdjstudios on Nov 9, 2018, 11:37
**Submitted by:** "Trawana Ervin" <trawana.ervin@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-11-09-48885/conversation
**Server:** Internal
**Client/Site:** 044
**Account:** 5238
**Issue:**
Monticello Medical Clinical PLC (044-5238) made a payment through the online portal during the billing process for Memphis and was not applied to the balance of the account. The payment displays in the account ledger, are you able to correct the issue to impact and show the correct balance?
Are there any steps that can be taken to prevent this from occurring in the future?
|
1.0
|
044 - Account Payment not showing - In GitLab by @kdjstudios on Nov 9, 2018, 11:37
**Submitted by:** "Trawana Ervin" <trawana.ervin@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-11-09-48885/conversation
**Server:** Internal
**Client/Site:** 044
**Account:** 5238
**Issue:**
Monticello Medical Clinical PLC (044-5238) made a payment through the online portal during the billing process for Memphis and was not applied to the balance of the account. The payment displays in the account ledger, are you able to correct the issue to impact and show the correct balance?
Are there any steps that can be taken to prevent this from occurring in the future?
|
process
|
account payment not showing in gitlab by kdjstudios on nov submitted by trawana ervin helpdesk server internal client site account issue monticello medical clinical plc made a payment through the online portal during the billing process for memphis and was not applied to the balance of the account the payment displays in the account ledger are you able to correct the issue to impact and show the correct balance are there any steps that can be taken to prevent this from occurring in the future
| 1
|
4,688
| 7,524,247,503
|
IssuesEvent
|
2018-04-13 06:10:31
|
ChickenKyiv/api-extended-database
|
https://api.github.com/repos/ChickenKyiv/api-extended-database
|
reopened
|
mongodb link
|
in-process
|
`mongodb://heroku_b97rxdzw:fnnua12mbtlqd5bh0i35roo0a@ds141524.mlab.com:41524/heroku_b97rxdzw`
---
You'll need to have a loopback-connector-mongo package in order to setup it well.
Sample: https://github.com/GroceriStar/groceristar/blob/master/server/datasources.json
|
1.0
|
mongodb link - `mongodb://heroku_b97rxdzw:fnnua12mbtlqd5bh0i35roo0a@ds141524.mlab.com:41524/heroku_b97rxdzw`
---
You'll need to have a loopback-connector-mongo package in order to setup it well.
Sample: https://github.com/GroceriStar/groceristar/blob/master/server/datasources.json
|
process
|
mongodb link mongodb heroku mlab com heroku you ll need to have a loopback connector mongo package in order to setup it well sample
| 1
|
288,800
| 31,930,902,756
|
IssuesEvent
|
2023-09-19 07:21:05
|
Trinadh465/linux-4.1.15_CVE-2023-4128
|
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128
|
opened
|
CVE-2018-6927 (High) detected in linuxlinux-4.6
|
Mend: dependency security vulnerability
|
## CVE-2018-6927 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The futex_requeue function in kernel/futex.c in the Linux kernel before 4.14.15 might allow attackers to cause a denial of service (integer overflow) or possibly have unspecified other impact by triggering a negative wake or requeue value.
<p>Publish Date: 2018-02-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-6927>CVE-2018-6927</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-6927">https://nvd.nist.gov/vuln/detail/CVE-2018-6927</a></p>
<p>Release Date: 2018-02-12</p>
<p>Fix Resolution: 4.14.15</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-6927 (High) detected in linuxlinux-4.6 - ## CVE-2018-6927 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The futex_requeue function in kernel/futex.c in the Linux kernel before 4.14.15 might allow attackers to cause a denial of service (integer overflow) or possibly have unspecified other impact by triggering a negative wake or requeue value.
<p>Publish Date: 2018-02-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-6927>CVE-2018-6927</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-6927">https://nvd.nist.gov/vuln/detail/CVE-2018-6927</a></p>
<p>Release Date: 2018-02-12</p>
<p>Fix Resolution: 4.14.15</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files vulnerability details the futex requeue function in kernel futex c in the linux kernel before might allow attackers to cause a denial of service integer overflow or possibly have unspecified other impact by triggering a negative wake or requeue value publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
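The CVSS 3 base score quoted in the record above (7.8 for AV:Local / AC:Low / PR:Low / UI:None / S:Unchanged / C:H / I:H / A:H) follows the public CVSS v3.0 base-score formula. A minimal sketch using the specification's metric weights; only the Scope: Unchanged case is implemented here, since both records in this dump are Scope: Unchanged:

```python
import math

# Metric weights from the CVSS v3.0 specification.
AV = {"Network": 0.85, "Adjacent": 0.62, "Local": 0.55, "Physical": 0.2}
AC = {"Low": 0.77, "High": 0.44}
PR = {"None": 0.85, "Low": 0.62, "High": 0.27}  # Scope: Unchanged weights
UI = {"None": 0.85, "Required": 0.62}
CIA = {"High": 0.56, "Low": 0.22, "None": 0.0}

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.0 base score, Scope: Unchanged only."""
    # Impact sub-score: 1 - product of (1 - weight) over C/I/A.
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # CVSS "round up": keep one decimal, always rounding toward 10.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Vector from the CVE-2018-6927 record above.
print(base_score("Local", "Low", "Low", "None", "High", "High", "High"))  # 7.8
```

The same function reproduces the 6.2 of the CVE-2021-23413 record further down (PR:None, C:None, A:None, I:High).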
|
115,842
| 17,346,941,653
|
IssuesEvent
|
2021-07-29 01:09:08
|
samq-ghdemo/Forrester-Demo
|
https://api.github.com/repos/samq-ghdemo/Forrester-Demo
|
opened
|
CVE-2021-23413 (Medium) detected in jszip-3.6.0.tgz
|
security vulnerability
|
## CVE-2021-23413 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jszip-3.6.0.tgz</b></p></summary>
<p>Create, read and edit .zip files with JavaScript http://stuartk.com/jszip</p>
<p>Library home page: <a href="https://registry.npmjs.org/jszip/-/jszip-3.6.0.tgz">https://registry.npmjs.org/jszip/-/jszip-3.6.0.tgz</a></p>
<p>Path to dependency file: Forrester-Demo/package.json</p>
<p>Path to vulnerable library: Forrester-Demo/node_modules/jszip/package.json</p>
<p>
Dependency Hierarchy:
- selenium-webdriver-3.6.0.tgz (Root Library)
- :x: **jszip-3.6.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package jszip before 3.7.0.
Crafting a new zip file with filenames set to Object prototype values (e.g __proto__, toString, etc) results in a returned object with a modified prototype instance.
<p>Publish Date: 2021-07-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23413>CVE-2021-23413</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23413">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23413</a></p>
<p>Release Date: 2021-07-25</p>
<p>Fix Resolution: jszip - 3.7.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"jszip","packageVersion":"3.6.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"selenium-webdriver:3.6.0;jszip:3.6.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jszip - 3.7.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23413","vulnerabilityDetails":"This affects the package jszip before 3.7.0.\n Crafting a new zip file with filenames set to Object prototype values (e.g __proto__, toString, etc) results in a returned object with a modified prototype instance.\r\n\r\n","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23413","cvss3Severity":"medium","cvss3Score":"6.2","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23413 (Medium) detected in jszip-3.6.0.tgz - ## CVE-2021-23413 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jszip-3.6.0.tgz</b></p></summary>
<p>Create, read and edit .zip files with JavaScript http://stuartk.com/jszip</p>
<p>Library home page: <a href="https://registry.npmjs.org/jszip/-/jszip-3.6.0.tgz">https://registry.npmjs.org/jszip/-/jszip-3.6.0.tgz</a></p>
<p>Path to dependency file: Forrester-Demo/package.json</p>
<p>Path to vulnerable library: Forrester-Demo/node_modules/jszip/package.json</p>
<p>
Dependency Hierarchy:
- selenium-webdriver-3.6.0.tgz (Root Library)
- :x: **jszip-3.6.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package jszip before 3.7.0.
Crafting a new zip file with filenames set to Object prototype values (e.g __proto__, toString, etc) results in a returned object with a modified prototype instance.
<p>Publish Date: 2021-07-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23413>CVE-2021-23413</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23413">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23413</a></p>
<p>Release Date: 2021-07-25</p>
<p>Fix Resolution: jszip - 3.7.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"jszip","packageVersion":"3.6.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"selenium-webdriver:3.6.0;jszip:3.6.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jszip - 3.7.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23413","vulnerabilityDetails":"This affects the package jszip before 3.7.0.\n Crafting a new zip file with filenames set to Object prototype values (e.g __proto__, toString, etc) results in a returned object with a modified prototype instance.\r\n\r\n","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23413","cvss3Severity":"medium","cvss3Score":"6.2","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in jszip tgz cve medium severity vulnerability vulnerable library jszip tgz create read and edit zip files with javascript library home page a href path to dependency file forrester demo package json path to vulnerable library forrester demo node modules jszip package json dependency hierarchy selenium webdriver tgz root library x jszip tgz vulnerable library found in base branch master vulnerability details this affects the package jszip before crafting a new zip file with filenames set to object prototype values e g proto tostring etc results in a returned object with a modified prototype instance publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jszip isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree selenium webdriver jszip isminimumfixversionavailable true minimumfixversion jszip basebranches vulnerabilityidentifier cve vulnerabilitydetails this affects the package jszip before n crafting a new zip file with filenames set to object prototype values e g proto tostring etc results in a returned object with a modified prototype instance r n r n vulnerabilityurl
| 0
|
16,213
| 20,737,380,417
|
IssuesEvent
|
2022-03-14 14:48:21
|
scikit-learn/scikit-learn
|
https://api.github.com/repos/scikit-learn/scikit-learn
|
closed
|
Add "other" / min_frequency option to OneHotEncoder
|
module:preprocessing
|
The OneHotEncoder should have an option to summarize categories that are not frequent - or we should have another transformer to do that beforehand.
Generally having a maximum number of categories or having a minimum frequency per category would make sense as thresholds. This is similar to what we're doing in CountVectorizer but I think common enough for categorical variables that we should explicitly make it easy to do.
|
1.0
|
Add "other" / min_frequency option to OneHotEncoder - The OneHotEncoder should have an option to summarize categories that are not frequent - or we should have another transformer to do that beforehand.
Generally having a maximum number of categories or having a minimum frequency per category would make sense as thresholds. This is similar to what we're doing in CountVectorizer but I think common enough for categorical variables that we should explicitly make it easy to do.
|
process
|
add other min frequency option to onehotencoder the onehotencoder should have an option to summarize categories that are not frequent or we should have another transformer to do that beforehand generally having a maximum number of categories or having a minimum frequency per category would make sense as thresholds this is similar to what we re doing in countvectorizer but i think common enough for categorical variables that we should explicitly make it easy to do
| 1
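The behavior the scikit-learn issue above asks for (collapsing infrequent categories into an "other" bucket via a minimum-frequency threshold, later added to `OneHotEncoder` upstream) can be sketched without scikit-learn as a plain frequency filter; the function name and the `"other"` placeholder are illustrative, not the library's API:

```python
from collections import Counter

def group_infrequent(values, min_frequency):
    """Replace categories rarer than min_frequency (a fraction) with 'other'."""
    counts = Counter(values)
    n = len(values)
    # Keep only categories whose relative frequency meets the threshold.
    keep = {cat for cat, k in counts.items() if k / n >= min_frequency}
    return [v if v in keep else "other" for v in values]

colors = ["red", "red", "red", "blue", "blue", "green"]
print(group_infrequent(colors, min_frequency=0.3))
# ['red', 'red', 'red', 'blue', 'blue', 'other']
```

One-hot encoding the grouped column then yields one column per frequent category plus a single "other" column, which is the thresholding idea the issue compares to `CountVectorizer`'s `min_df`.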
|
17,821
| 23,744,750,606
|
IssuesEvent
|
2022-08-31 15:05:09
|
googleapis/java-compute
|
https://api.github.com/repos/googleapis/java-compute
|
closed
|
compute.v1.integration.ITSmokeInstancesTest: testCapitalLetterField failed
|
priority: p2 type: process api: compute flakybot: issue flakybot: flaky
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: da160e4798bab53d41f656c83647efc0c1883b5d
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d07ea732-cb00-4375-9eda-2701fcaf5b6b), [Sponge](http://sponge2/d07ea732-cb00-4375-9eda-2701fcaf5b6b)
status: failed
<details><summary>Test output</summary><br><pre>com.google.api.gax.rpc.InvalidArgumentException: Bad Request
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:90)
at com.google.api.gax.httpjson.HttpJsonExceptionCallable$ExceptionTransformingFuture.onFailure(HttpJsonExceptionCallable.java:106)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1074)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at com.google.api.core.AbstractApiFuture$InternalSettableFuture.setException(AbstractApiFuture.java:95)
at com.google.api.core.AbstractApiFuture.setException(AbstractApiFuture.java:77)
at com.google.api.core.SettableApiFuture.setException(SettableApiFuture.java:52)
at com.google.api.gax.httpjson.HttpRequestRunnable.run(HttpRequestRunnable.java:210)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.compute.v1.FirewallsClient.delete(FirewallsClient.java:195)
at com.google.cloud.compute.v1.FirewallsClient.delete(FirewallsClient.java:170)
at com.google.cloud.compute.v1.integration.ITSmokeInstancesTest.testCapitalLetterField(ITSmokeInstancesTest.java:288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
Caused by: com.google.api.client.http.HttpResponseException: 400 Bad Request
DELETE https://compute.googleapis.com:443/compute/v1/projects/gcloud-devel/global/firewalls/gapic-fw-rule743a0554
{
"error": {
"code": 400,
"message": "The resource 'projects/gcloud-devel/global/firewalls/gapic-fw-rule743a0554' is not ready",
"errors": [
{
"message": "The resource 'projects/gcloud-devel/global/firewalls/gapic-fw-rule743a0554' is not ready",
"domain": "global",
"reason": "resourceNotReady"
}
]
}
}
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1116)
at com.google.api.gax.httpjson.HttpRequestRunnable.run(HttpRequestRunnable.java:191)
... 7 more
</pre></details>
|
1.0
|
compute.v1.integration.ITSmokeInstancesTest: testCapitalLetterField failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: da160e4798bab53d41f656c83647efc0c1883b5d
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d07ea732-cb00-4375-9eda-2701fcaf5b6b), [Sponge](http://sponge2/d07ea732-cb00-4375-9eda-2701fcaf5b6b)
status: failed
<details><summary>Test output</summary><br><pre>com.google.api.gax.rpc.InvalidArgumentException: Bad Request
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:90)
at com.google.api.gax.httpjson.HttpJsonExceptionCallable$ExceptionTransformingFuture.onFailure(HttpJsonExceptionCallable.java:106)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1074)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1213)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at com.google.api.core.AbstractApiFuture$InternalSettableFuture.setException(AbstractApiFuture.java:95)
at com.google.api.core.AbstractApiFuture.setException(AbstractApiFuture.java:77)
at com.google.api.core.SettableApiFuture.setException(SettableApiFuture.java:52)
at com.google.api.gax.httpjson.HttpRequestRunnable.run(HttpRequestRunnable.java:210)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.compute.v1.FirewallsClient.delete(FirewallsClient.java:195)
at com.google.cloud.compute.v1.FirewallsClient.delete(FirewallsClient.java:170)
at com.google.cloud.compute.v1.integration.ITSmokeInstancesTest.testCapitalLetterField(ITSmokeInstancesTest.java:288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
Caused by: com.google.api.client.http.HttpResponseException: 400 Bad Request
DELETE https://compute.googleapis.com:443/compute/v1/projects/gcloud-devel/global/firewalls/gapic-fw-rule743a0554
{
"error": {
"code": 400,
"message": "The resource 'projects/gcloud-devel/global/firewalls/gapic-fw-rule743a0554' is not ready",
"errors": [
{
"message": "The resource 'projects/gcloud-devel/global/firewalls/gapic-fw-rule743a0554' is not ready",
"domain": "global",
"reason": "resourceNotReady"
}
]
}
}
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1116)
at com.google.api.gax.httpjson.HttpRequestRunnable.run(HttpRequestRunnable.java:191)
... 7 more
</pre></details>
|
process
|
compute integration itsmokeinstancestest testcapitalletterfield failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output com google api gax rpc invalidargumentexception bad request at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax httpjson httpjsonexceptioncallable exceptiontransformingfuture onfailure httpjsonexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at com google api core abstractapifuture internalsettablefuture setexception abstractapifuture java at com google api core abstractapifuture setexception abstractapifuture java at com google api core settableapifuture setexception settableapifuture java at com google api gax httpjson httprequestrunnable run httprequestrunnable java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask access scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java suppressed com google api gax rpc asynctaskexception asynchronous task failed at com google api gax rpc apiexceptions 
callandtranslateapiexception apiexceptions java at com google api gax rpc unarycallable call unarycallable java at com google cloud compute firewallsclient delete firewallsclient java at com google cloud compute firewallsclient delete firewallsclient java at com google cloud compute integration itsmokeinstancestest testcapitalletterfield itsmokeinstancestest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit 
runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executeeager junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by com google api client http httpresponseexception bad request delete error code message the resource projects gcloud devel global firewalls gapic fw is not ready errors message the resource projects gcloud devel global firewalls gapic fw is not ready domain global reason resourcenotready at com google api client http httprequest execute httprequest java at com google api gax httpjson httprequestrunnable run httprequestrunnable java more
| 1
|
6,836
| 15,437,212,987
|
IssuesEvent
|
2021-03-07 15:54:07
|
tldr-pages/tldr
|
https://api.github.com/repos/tldr-pages/tldr
|
closed
|
Removing pages.pt_PT
|
architecture decision mass changes page edit
|
I would like to propose to merge `pages.pt_PT` into `pages.pt_BR` and then remove `pages.pt_PT`, and then rename `pages.pt_BR` → `pages.pt` (maybe not rename it though, I'm not sure what effect that might have on clients 🤔).
My reasoning is as follows:
- Notice the difference of `common/xkill.md`, for example:
```diff
> Termina o cliente associado a um elemento gráfico.
> Utilizado para forçar a terminação de processos que não respondem ou não apresentam botão "fechar".
-- Ativar um cursor para fechar uma janela com o clique do botão esquerdo do rato (pressionar qualquer outro botão para
cancelar):
+- Ativar um cursor para fechar uma janela com o clique do botão esquerdo do mouse (pressionar qualquer outro botão
para cancelar):
`xkill`
```
- In `touch.md`, for example, every line is different between the two translations, but the only difference is that `pt_PT` uses the noun `ficheiro` while `pt_BR` uses `arquivo`. (`file` vs. `archive`?)
- Besides those seemingly minor differences (which may just be wording usages on the part of the translator, not even necessarily vocabulary differences between the two dialects), there are **151** `pt_BR` pages and **9** `pt_PT` pages.
- There are other such dialectic differences with other languages, e.g. American English vs. British English, but we don't have a `pages.en_US` and `pages.en_GB`.
|
1.0
|
Removing pages.pt_PT - I would like to propose to merge `pages.pt_PT` into `pages.pt_BR` and then remove `pages.pt_PT`, and then rename `pages.pt_BR` → `pages.pt` (maybe not rename it though, I'm not sure what effect that might have on clients 🤔).
My reasoning is as follows:
- Notice the difference of `common/xkill.md`, for example:
```diff
> Termina o cliente associado a um elemento gráfico.
> Utilizado para forçar a terminação de processos que não respondem ou não apresentam botão "fechar".
-- Ativar um cursor para fechar uma janela com o clique do botão esquerdo do rato (pressionar qualquer outro botão para
cancelar):
+- Ativar um cursor para fechar uma janela com o clique do botão esquerdo do mouse (pressionar qualquer outro botão
para cancelar):
`xkill`
```
- In `touch.md`, for example, every line is different between the two translations, but the only difference is that `pt_PT` uses the noun `ficheiro` while `pt_BR` uses `arquivo`. (`file` vs. `archive`?)
- Besides those seemingly minor differences (which may just be wording usages on the part of the translator, not even necessarily vocabulary differences between the two dialects), there are **151** `pt_BR` pages and **9** `pt_PT` pages.
- There are other such dialectic differences with other languages, e.g. American English vs. British English, but we don't have a `pages.en_US` and `pages.en_GB`.
|
non_process
|
removing pages pt pt i would like to propose to merge pages pt pt into pages pt br and the remove pages pt pt and then rename pages pt br → pages pt maybe not rename it though i m not sure what effect that might have on clients 🤔 my reasoning is as follows notice the difference of common xkill md for example diff termina o cliente associado a um elemento gráfico utilizado para forçar a terminação de processos que não respondem ou não apresentam botão fechar ativar um cursor para fechar uma janela com o clique do botão esquerdo do rato pressionar qualquer outro botão para cancelar ativar um cursor para fechar uma janela com o clique do botão esquerdo do mouse pressionar qualquer outro botão para cancelar xkill in touch md for example every line is different between the two translations but the only different is that pt pt uses the noun ficheiro while pt br uses arquivo file vs archive besides those seemingly minor differences which may just be wording usages on the part of the translator not even necessarily vocabulary differences between the two dialects there are pt br pages and pt pt pages there are other such dialectic differences with other languages e g american english vs british english but we don t have a pages en us and pages en gb
| 0
|
9,052
| 12,130,108,073
|
IssuesEvent
|
2020-04-23 00:30:41
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
remove gcp-devrel-py-tools from appengine/standard/mailgun/requirements-test.txt
|
priority: p2 remove-gcp-devrel-py-tools type: process
|
remove gcp-devrel-py-tools from appengine/standard/mailgun/requirements-test.txt
|
1.0
|
remove gcp-devrel-py-tools from appengine/standard/mailgun/requirements-test.txt - remove gcp-devrel-py-tools from appengine/standard/mailgun/requirements-test.txt
|
process
|
remove gcp devrel py tools from appengine standard mailgun requirements test txt remove gcp devrel py tools from appengine standard mailgun requirements test txt
| 1
|
174,298
| 6,538,910,193
|
IssuesEvent
|
2017-09-01 08:49:18
|
edenlabllc/ehealth.api
|
https://api.github.com/repos/edenlabllc/ehealth.api
|
opened
|
EP: Update Innm
|
epic/medication kind/task priority/medium project/reimbursement
|
Update Innm entity WS according to requirements
Requirements: [вимоги](https://docs.google.com/document/d/15qqew84E9PdDb0LS6mhCagOLqcnNa6wmxenR5kHrUPg/edit?usp=sharing)
- [x] Update WS [spec](https://edenlab.atlassian.net/wiki/spaces/EH/pages/12976141/Update+Innm)
- [ ] Implement
- [ ] Deploy
- [ ] Improve test scenarios
|
1.0
|
EP: Update Innm - Update Innm entity WS according to requirements
Requirements: [вимоги](https://docs.google.com/document/d/15qqew84E9PdDb0LS6mhCagOLqcnNa6wmxenR5kHrUPg/edit?usp=sharing)
- [x] Update WS [spec](https://edenlab.atlassian.net/wiki/spaces/EH/pages/12976141/Update+Innm)
- [ ] Implement
- [ ] Deploy
- [ ] Improve test scenarios
|
non_process
|
ep update innm update innm entity ws according to requirements requirements update ws implement deploy improve test scenarios
| 0
|
11,750
| 14,589,484,903
|
IssuesEvent
|
2020-12-19 02:12:35
|
ZhaJiMan/ZhaJiMan.github.io
|
https://api.github.com/repos/ZhaJiMan/ZhaJiMan.github.io
|
opened
|
在 Python 用进程池搞并行 | 炸 鸡 人
|
/post/multiprocessing/ Gitalk
|
https://zhajiman.github.io/post/multiprocessing/
炸鸡是指以油炸方式烹调的鸡肉。炸鸡有很多不同的油炸种类,例如原件连皮连骨的鸡件,或者已去皮去骨的鸡肉块。不同国家和地区的炸鸡,均有其独特的特色。
|
1.0
|
在 Python 用进程池搞并行 | 炸 鸡 人 - https://zhajiman.github.io/post/multiprocessing/
炸鸡是指以油炸方式烹调的鸡肉。炸鸡有很多不同的油炸种类,例如原件连皮连骨的鸡件,或者已去皮去骨的鸡肉块。不同国家和地区的炸鸡,均有其独特的特色。
|
process
|
在 python 用进程池搞并行 炸 鸡 人 炸鸡是指以油炸方式烹调的鸡肉。炸鸡有很多不同的油炸种类,例如原件连皮连骨的鸡件,或者已去皮去骨的鸡肉块。不同国家和地区的炸鸡,均有其独特的特色。
| 1
|
11,228
| 14,005,321,717
|
IssuesEvent
|
2020-10-28 18:17:34
|
kubernetes/minikube
|
https://api.github.com/repos/kubernetes/minikube
|
closed
|
automate pushing kicbase image from snapshot to stable after release
|
kind/process priority/important-soon
|
before release each time we build the kic base image we add the snapshotX to it
for example
https://github.com/medyagh/minikube/blob/32922f4184c31fb405fb8dab7a1d7a8ef120b47a/pkg/drivers/kic/types.go#L27
Version = "v0.0.13-snapshot1"
this is good for when multiple people make corrections before a release, and once it is finalized, before the release we do this
```
docker tag kicbase/stable:v0.0.13-snapshot1 kicbase/stable:v0.0.13
docker tag kicbase/stable:v0.0.13 gcr.io/k8s-minikube/kicbase:v0.0.13
docker tag kicbase/stable:v0.0.13 docker.pkg.github.com/kubernetes/minikube/kicbase:v0.0.13
docker push kicbase/stable:v0.0.13
docker push gcr.io/k8s-minikube/kicbase:v0.0.13
docker push docker.pkg.github.com/kubernetes/minikube/kicbase:v0.0.13
```
and change the types.go in kic package
https://github.com/medyagh/minikube/blob/32922f4184c31fb405fb8dab7a1d7a8ef120b47a/pkg/drivers/kic/types.go#L27
|
1.0
|
automate pushing kicbase image from snapshot to stable after release - before release each time we build the kic base image we add the snapshotX to it
for example
https://github.com/medyagh/minikube/blob/32922f4184c31fb405fb8dab7a1d7a8ef120b47a/pkg/drivers/kic/types.go#L27
Version = "v0.0.13-snapshot1"
this is good for when multiple people make corrections before a release, and once it is finalized, before the release we do this
```
docker tag kicbase/stable:v0.0.13-snapshot1 kicbase/stable:v0.0.13
docker tag kicbase/stable:v0.0.13 gcr.io/k8s-minikube/kicbase:v0.0.13
docker tag kicbase/stable:v0.0.13 docker.pkg.github.com/kubernetes/minikube/kicbase:v0.0.13
docker push kicbase/stable:v0.0.13
docker push gcr.io/k8s-minikube/kicbase:v0.0.13
docker push docker.pkg.github.com/kubernetes/minikube/kicbase:v0.0.13
```
and change the types.go in kic package
https://github.com/medyagh/minikube/blob/32922f4184c31fb405fb8dab7a1d7a8ef120b47a/pkg/drivers/kic/types.go#L27
|
process
|
automate pushing kicbase image from snapshot to stable after release before release each time we build the kic base image we add the snashotx to it for example version this is good for when multiple people make corrections before a release and once it is finalized before the release we do this docker tag kicbase stable kicbase stable docker tag kicbase stable gcr io minikube kicbase docker tag kicbase stable docker pkg github com kubernetes minikube kicbase docker push kicbase stable docker push gcr io minikube kicbase docker push docker pkg github com kubernetes minikube kicbase and change the types go in kic package
| 1
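The manual tag-and-push sequence quoted in the minikube record above is generated mechanically from a single version string, so the automation the issue asks for can be sketched by deriving every command from that string. This is a hedged illustration, not minikube's actual release tooling: the function name `kicbase_release_commands` and the `-snapshot1` suffix convention are assumptions taken from the issue text.

```python
def kicbase_release_commands(version: str) -> list[str]:
    """Generate the docker tag/push commands that promote a kicbase
    snapshot image (e.g. version='v0.0.13') to a stable release."""
    snapshot = f"kicbase/stable:{version}-snapshot1"
    stable = f"kicbase/stable:{version}"
    # Mirror registries named in the issue body.
    mirrors = [
        f"gcr.io/k8s-minikube/kicbase:{version}",
        f"docker.pkg.github.com/kubernetes/minikube/kicbase:{version}",
    ]
    cmds = [f"docker tag {snapshot} {stable}"]
    cmds += [f"docker tag {stable} {m}" for m in mirrors]
    cmds += [f"docker push {t}" for t in [stable, *mirrors]]
    return cmds

if __name__ == "__main__":
    for cmd in kicbase_release_commands("v0.0.13"):
        print(cmd)
```

A release script could feed these strings to `subprocess.run` (and afterwards rewrite the `Version` constant in `types.go`), so the only manual input per release is the version number.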
|
19,435
| 25,705,865,597
|
IssuesEvent
|
2022-12-07 00:36:02
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
Analista de Sistemas SR na [QUALIDADOS]
|
SALVADOR DESENVOLVIMENTO DE SOFTWARE C# ASP.NET VB.NET INGLÊS MSSQL GERÊNCIA DE PROJETO MODELAGEM DE PROCESSOS ERP ARTISAN Stale
|
<!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Analista de Sistemas SR
Atuar no desenvolvimento e/ou manutenção de softwares, fazendo a analise de processos, verificando alternativas e viabilidades para as soluções, interagindo com o cliente interno.
**Perfil**
> Gestão de projetos, liderança de equipe, planejamento e controle e tomada de decisão
**Formação Superior**
> Sistemas da Informação, Tec. em Analise e Desenvolvimento de Sistemas, Ciência da Computação
## Local
Salvador - BAHIA
## Benefícios
- Plano de saúde
- Seguro de vida
- VR
- Transporte
## Requisitos
**Obrigatórios:**
- Experiencia em desenvolvimento de software do inicio ao fim do projeto
- Vivencia com ERP
- C# Avançado
- ASP.Net
- VB.Net
- SQL Server
- Inglês Técnico
**Desejáveis:**
- Pós em gestão de projetos
## QUALIDADOS
Com sede na Bahia, a Qualidados realiza, atualmente, projetos em todas as regiões do Brasil. Com uma equipe integrada e formada por engenheiros, técnicos, analistas e consultores de alta qualificação técnica e constante espírito de atualização, estamos prontos para atender projetos em qualquer localidade no cenário nacional (onshore e offshore).
## Como se candidatar
Interessados, enviar CV com o nome da função no assunto para curriculo@qualidados.com.br
|
1.0
|
Analista de Sistemas SR na [QUALIDADOS] - <!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Analista de Sistemas SR
Atuar no desenvolvimento e/ou manutenção de softwares, fazendo a analise de processos, verificando alternativas e viabilidades para as soluções, interagindo com o cliente interno.
**Perfil**
> Gestão de projetos, liderança de equipe, planejamento e controle e tomada de decisão
**Formação Superior**
> Sistemas da Informação, Tec. em Analise e Desenvolvimento de Sistemas, Ciência da Computação
## Local
Salvador - BAHIA
## Benefícios
- Plano de saúde
- Seguro de vida
- VR
- Transporte
## Requisitos
**Obrigatórios:**
- Experiencia em desenvolvimento de software do inicio ao fim do projeto
- Vivencia com ERP
- C# Avançado
- ASP.Net
- VB.Net
- SQL Server
- Inglês Técnico
**Desejáveis:**
- Pós em gestão de projetos
## QUALIDADOS
Com sede na Bahia, a Qualidados realiza, atualmente, projetos em todas as regiões do Brasil. Com uma equipe integrada e formada por engenheiros, técnicos, analistas e consultores de alta qualificação técnica e constante espírito de atualização, estamos prontos para atender projetos em qualquer localidade no cenário nacional (onshore e offshore).
## Como se candidatar
Interessados, enviar CV com o nome da função no assunto para curriculo@qualidados.com.br
|
process
|
analista de sistemas sr na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na analista de sistemas sr atuar no desenvolvimento e ou manutenção de softwares fazendo a analise de processos verificando alternativas e viabilidades para as soluções interagindo com o cliente interno perfil gestão de projetos liderança de equipe planejamento e controle e tomada de decisão formação superior sistemas da informação tec em analise e desenvolvimento de sistemas ciência da computação local salvador bahia benefícios plano de saúde seguro de vida vr transporte requisitos obrigatórios experiencia em desenvoilvimenbto de software do inicio ao fim do projeto vivencia com erp c avançado asp net vb net sql server inglês técnico desejáveis pós em gestão de projetos qualidados com sede na bahia a qualidados realiza atualmente projetos em todas as regiões do brasil com uma equipe integrada e formada por engenheiros técnicos analistas e consultores de alta qualificação técnica e constante espírito de atualização estamos prontos para atender projetos em qualquer localidade no cenário nacional onshore e offshore como se candidatar interessados enviar cv com o nome da função no assunto para curriculo qualidados com br
| 1
|
17,473
| 23,298,382,275
|
IssuesEvent
|
2022-08-07 00:02:35
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Locking StreamReader processes on EndOfStream
|
area-System.Diagnostics.Process no-recent-activity needs-author-action
|
**NET Core version** : 3.1.10-pre1
We start a child process and read its STDERR and STDOUT streams.
We faced the problem that sometimes the child process does not finish, and we suspect that our call to StreamReader `EndOfStream` locks.
We found the doc: https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.process.standardoutput?view=net-6.0#remarks
But there described the case when program tries to read output stream asynchronously while child process tries to read input stream.
In our case, we write everything through the input stream and then close it. During further execution we only read output streams with the following code: https://github.com/microsoft/azure-pipelines-agent/blob/ac0dbcdb53cbcd77133641cbd39e00ca0e4d93d1/src/Agent.Sdk/ProcessInvoker.cs#L483-L508.
We call the function that contains code above twice for two readers (error and output streams).
Logs show correct execution of child process with all lines from output and error streams, but when child process should finish, it doesn't exit.
We suspect that streams are locked, that on line `while (!reader.EndOfStream)`, that doesn't allow child process to exit.
Could you please suggest why lock on EndOfStream might occur? Could it be some known issue?
And could you please advise how we can avoid it?
|
1.0
|
Locking StreamReader processes on EndOfStream - **NET Core version** : 3.1.10-pre1
We start a child process and read its STDERR and STDOUT streams.
We faced the problem that sometimes the child process does not finish, and we suspect that our call to StreamReader `EndOfStream` locks.
We found the doc: https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.process.standardoutput?view=net-6.0#remarks
But there described the case when program tries to read output stream asynchronously while child process tries to read input stream.
In our case, we write everything through the input stream and then close it. During further execution we only read output streams with the following code: https://github.com/microsoft/azure-pipelines-agent/blob/ac0dbcdb53cbcd77133641cbd39e00ca0e4d93d1/src/Agent.Sdk/ProcessInvoker.cs#L483-L508.
We call the function that contains code above twice for two readers (error and output streams).
Logs show correct execution of child process with all lines from output and error streams, but when child process should finish, it doesn't exit.
We suspect that streams are locked, that on line `while (!reader.EndOfStream)`, that doesn't allow child process to exit.
Could you please suggest why lock on EndOfStream might occur? Could it be some known issue?
And could you please advise how we can avoid it?
|
process
|
locking streamreader processes on endofstream net core version we start child process and for this process we read stderr and stdout streams we faced with the problem that sometimes child process does not finish and we suspect that our call of streamreader endofstream locks we found the doc but there described the case when program tries to read output stream asynchronously while child process tries to read input stream in our case we write everything through the input stream and then close it during further execution we only read output streams with the following code we call the function that contains code above twice for two readers error and output streams logs show correct execution of child process with all lines from output and error streams but when child process should finish it doesn t exit we suspect that streams are locked that on line while reader endofstream that doesn t allow child process to exit could you please suggest why lock on endofstream might occur could it be some known issue and could you please advise how we can avoid it
| 1
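The deadlock described in the StreamReader record above is the classic pipe-buffer problem: if either stdout or stderr is drained only after the other, the child can block writing to the undrained pipe and never exit. The usual fix is to drain both pipes on dedicated threads until EOF. A minimal Python sketch of that pattern follows (the helper names and the `python -c` child command are illustrative choices, not from the original C# code):

```python
import subprocess
import sys
import threading

def drain(stream, sink):
    # Read until EOF on its own thread so neither pipe's OS buffer
    # can fill up and block the child process.
    for line in iter(stream.readline, ""):
        sink.append(line.rstrip("\n"))
    stream.close()

def run_and_capture(argv):
    proc = subprocess.Popen(
        argv, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    out, err = [], []
    threads = [
        threading.Thread(target=drain, args=(proc.stdout, out)),
        threading.Thread(target=drain, args=(proc.stderr, err)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # EOF on both pipes implies the child closed them
    return proc.wait(), out, err

if __name__ == "__main__":
    code, out, err = run_and_capture(
        [sys.executable, "-c",
         "import sys; print('hello'); print('oops', file=sys.stderr)"]
    )
    print(code, out, err)
```

The same shape applies in .NET: read one stream on a background task (or use `BeginOutputReadLine`/`BeginErrorReadLine`) rather than polling `EndOfStream` on both streams sequentially.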