Column schema (dtype and value/length statistics per column):

| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | values 0 to 832k |
| id | float64 | values 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | values 0 to 1 |

Sample rows:
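The header block describes a fixed column schema, so incoming rows can be checked against it before use. A minimal stdlib-only sketch of such a check; the `validate` helper and the sample record are illustrative, not part of the dump:

```python
# Expected schema of the dump, as listed in the column table above.
SCHEMA = [
    ("Unnamed: 0", int), ("id", float), ("type", str),
    ("created_at", str), ("repo", str), ("repo_url", str),
    ("action", str), ("title", str), ("labels", str),
    ("body", str), ("index", str), ("text_combine", str),
    ("label", str), ("text", str), ("binary_label", int),
]

def validate(record: dict) -> list[str]:
    """Return a list of schema violations for one record (empty if clean)."""
    problems = []
    for name, typ in SCHEMA:
        if name not in record:
            problems.append(f"missing column: {name}")
        elif not isinstance(record[name], typ):
            problems.append(f"{name}: expected {typ.__name__}")
    if record.get("binary_label") not in (0, 1):
        problems.append("binary_label must be 0 or 1")
    return problems

# Build a placeholder record matching the schema, then check it.
record = {name: (0 if typ is int else 0.0 if typ is float else "x")
          for name, typ in SCHEMA}
record["binary_label"] = 1
print(validate(record))  # -> []
```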
**Row 34,768** (id 2,787,473,276)
- type: IssuesEvent
- created_at: 2015-05-08 06:13:58
- repo: CheckiO/checkio-empire-battle
- repo_url: https://api.github.com/repos/CheckiO/checkio-empire-battle
- action: closed
- title: Add buildings and obstacles on the map
- labels: complex:middle priority:high
- body:
Add a new type for initial data "building"
```json
{
  "type": "building",
  "name": "library",
  "player_id": 0,
  "health": 1000,
  "coordinates": [3, 3],
  "size": 0.5
}
{
  "type": "obstacles",
  "health": 10000,
  "coordinates": [3, 3],
  "size": 0.5
}
```
- index: 1.0
- text_combine:
Add buildings and obstacles on the map - Add a new type for initial data "building"
```json
{
  "type": "building",
  "name": "library",
  "player_id": 0,
  "health": 1000,
  "coordinates": [3, 3],
  "size": 0.5
}
{
  "type": "obstacles",
  "health": 10000,
  "coordinates": [3, 3],
  "size": 0.5
}
```
- label: non_process
- text:
add buildings and obstacles on the map add a new type for initial data building json type building name library player id health coordinates size type obstacles health coordinates size
- binary_label: 0
|
**Row 787** (id 3,271,023,265)
- type: IssuesEvent
- created_at: 2015-10-24 03:13:43
- repo: t3kt/vjzual2
- repo_url: https://api.github.com/repos/t3kt/vjzual2
- action: closed
- title: scale the sx/sy/tx/ty parameters in the feedback module so that they aren't jumpy
- labels: bug video processing
- body: the range is way too big so any attempt to edit it with a mouse results in a big shift
- index: 1.0
- text_combine: scale the sx/sy/tx/ty parameters in the feedback module so that they aren't jumpy - the range is way too big so any attempt to edit it with a mouse results in a big shift
- label: process
- text: scale the sx sy tx ty parameters in the feedback module so that they aren t jumpy the range is way too big so any attempt to edit it with a mouse results in a big shift
- binary_label: 1
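Across the sample rows, the `text` column looks like a normalized copy of `text_combine`: lowercased, with punctuation and digits replaced by spaces and whitespace collapsed. A sketch of one plausible cleaner under that assumption; the dataset's real preprocessing is not shown here, and it evidently also preserves some symbol characters such as ⚠, which this version would drop:

```python
import re

def normalize(text_combine: str) -> str:
    """Lowercase, replace every non-letter run with a space, collapse whitespace."""
    lowered = text_combine.lower()
    letters_only = re.sub(r"[^a-z]+", " ", lowered)
    return " ".join(letters_only.split())

# The text_combine value from the row above; normalize() reproduces its text column.
src = ("scale the sx/sy/tx/ty parameters in the feedback module so that "
       "they aren't jumpy - the range is way too big so any attempt to "
       "edit it with a mouse results in a big shift")
print(normalize(src))
```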
|
**Row 6,970** (id 10,120,390,718)
- type: IssuesEvent
- created_at: 2019-07-31 13:39:04
- repo: MicrosoftDocs/azure-docs
- repo_url: https://api.github.com/repos/MicrosoftDocs/azure-docs
- action: closed
- title: Can Hybrid Workers use Group Managed Service Accounts?
- labels: Pri2 assigned-to-author automation/svc process-automation/subsvc product-question triaged
- body:
This page says the Hybrid Worker runs as Local System, but can it be configured to instead run as a Group Managed Service Account? (https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview)
The guidance on this page is to use Credential Assets, which are super convenient but less secure.
Are there known issues when using GMSAs with Hybrid Workers?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run runbooks on Azure Automation Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks#feedback)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
- index: 1.0
- text_combine:
Can Hybrid Workers use Group Managed Service Accounts? - This page says the Hybrid Worker runs as Local System, but can it be configured to instead run as a Group Managed Service Account? (https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview)
The guidance on this page is to use Credential Assets, which are super convenient but less secure.
Are there known issues when using GMSAs with Hybrid Workers?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: a21ca143-2f33-5cea-94a8-ace7e9de5f9c
* Version Independent ID: d7f2ef01-8c25-770e-dfd9-37b98dc7ba29
* Content: [Run runbooks on Azure Automation Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-hrw-run-runbooks#feedback)
* Content Source: [articles/automation/automation-hrw-run-runbooks.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-hrw-run-runbooks.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
- label: process
- text:
can hybrid workers use group managed service accounts this page says the hybrid worker runs as local system but can it be configured to instead run as a group managed service account the guidance on this page is to use credential assets which are super convenient but less secure are there known issues when using gmsas with hybrid workers document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
- binary_label: 1
|
**Row 119,238** (id 10,035,360,631)
- type: IssuesEvent
- created_at: 2019-07-18 08:09:37
- repo: imixs/imixs-workflow
- repo_url: https://api.github.com/repos/imixs/imixs-workflow
- action: closed
- title: FileData - name should not be changed by setter
- labels: bug testing
- body:
  the setName method should not change the given name
  Currently / is a identifier for a sub string. which is wrong!
- index: 1.0
- text_combine:
  FileData - name should not be changed by setter - the setName method should not change the given name
  Currently / is a identifier for a sub string. which is wrong!
- label: non_process
- text: filedata name should not be changed by setter the setname method should not change the given name currently is a identifier for a sub string which is wrong
- binary_label: 0
|
**Row 5,806** (id 8,643,540,999)
- type: IssuesEvent
- created_at: 2018-11-25 18:55:19
- repo: gfrebello/qs-trip-planning-procedure
- repo_url: https://api.github.com/repos/gfrebello/qs-trip-planning-procedure
- action: closed
- title: Code front end to show summary of trip planning
- labels: Priority:Very High Process:Implement Requirement
- body: Create a page showing user's trip planning summary and a button to confirm it.
- index: 1.0
- text_combine: Code front end to show summary of trip planning - Create a page showing user's trip planning summary and a button to confirm it.
- label: process
- text: code front end to show summary of trip planning create a page showing user s trip planning summary and a button to confirm it
- binary_label: 1
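In every row shown, `binary_label` mirrors `label`: process maps to 1 and non_process to 0. A minimal mapping sketch under that assumption (the function name is illustrative, not taken from the dataset's own code):

```python
# Assumed mapping from the string label to the 0/1 target column.
LABEL_TO_BINARY = {"process": 1, "non_process": 0}

def binary_label(label: str) -> int:
    """Map a label string to the integer used in the binary_label column."""
    return LABEL_TO_BINARY[label]

print(binary_label("process"), binary_label("non_process"))  # -> 1 0
```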
|
**Row 203,497** (id 15,883,518,768)
- type: IssuesEvent
- created_at: 2021-04-09 17:30:16
- repo: MICA-MNI/BrainStat
- repo_url: https://api.github.com/repos/MICA-MNI/BrainStat
- action: closed
- title: Adding comments to SurfStat MATLAB functions
- labels: Documentation MATLAB
- body:
  SurfStat's in-line comments are a little lacking so lets fix that. Functions that have been ported from SurfStat to BrainStat are in matlab/surfstat_ported. Please PR the comments to my branch so we have a good overview of all comment additions before merging them to master.
  When you've added comments to a function, please list it here.
- index: 1.0
- text_combine:
  Adding comments to SurfStat MATLAB functions - SurfStat's in-line comments are a little lacking so lets fix that. Functions that have been ported from SurfStat to BrainStat are in matlab/surfstat_ported. Please PR the comments to my branch so we have a good overview of all comment additions before merging them to master.
  When you've added comments to a function, please list it here.
- label: non_process
- text: adding comments to surfstat matlab functions surfstat s in line comments are a little lacking so lets fix that functions that have been ported from surfstat to brainstat are in matlab surfstat ported please pr the comments to my branch so we have a good overview of all comment additions before merging them to master when you ve added comments to a function please list it here
- binary_label: 0
|
**Row 22,192** (id 30,747,109,178)
- type: IssuesEvent
- created_at: 2023-07-28 15:52:08
- repo: gsoft-inc/ov-igloo-ui
- repo_url: https://api.github.com/repos/gsoft-inc/ov-igloo-ui
- action: closed
- title: [Feature Request]: Add a prop onAfterClose on the ActionMenu
- labels: in backlog in process
- body:
### Component that this feature request involves
ActionMenu
### Is your feature request related to a problem? Please describe
I need to remove a css class onClose of the ActionMenu, but it is not visually nice because of the animation (the onClose is triggered before the animation is completed). With a prop onAfterClose triggered after the animation is completed, it would be perfect for my case! Now I use a settimeout and the timeout value is 150 because I checked in the ds code to see what it is. So if you change it, my value will not be good anymore. The best would be a onAfterClose prop! Thanks!
### Describe the solution you'd like
Add a prop onAfterClose (or the naming that you want) on the ActionMenu that will be triggered after the animation is completed.
### Describe alternatives you've considered
As mentionned before, I used a settimeout, but it is not a good solution in my opinion.
### Additional context
_No response_
- index: 1.0
- text_combine:
[Feature Request]: Add a prop onAfterClose on the ActionMenu - ### Component that this feature request involves
ActionMenu
### Is your feature request related to a problem? Please describe
I need to remove a css class onClose of the ActionMenu, but it is not visually nice because of the animation (the onClose is triggered before the animation is completed). With a prop onAfterClose triggered after the animation is completed, it would be perfect for my case! Now I use a settimeout and the timeout value is 150 because I checked in the ds code to see what it is. So if you change it, my value will not be good anymore. The best would be a onAfterClose prop! Thanks!
### Describe the solution you'd like
Add a prop onAfterClose (or the naming that you want) on the ActionMenu that will be triggered after the animation is completed.
### Describe alternatives you've considered
As mentionned before, I used a settimeout, but it is not a good solution in my opinion.
### Additional context
_No response_
- label: process
- text:
add a prop onafterclose on the actionmenu component that this feature request involves actionmenu is your feature request related to a problem please describe i need to remove a css class onclose of the actionmenu but it is not visually nice because of the animation the onclose is triggered before the animation is completed with a prop onafterclose triggered after the animation is completed it would be perfect for my case now i use a settimeout and the timeout value is because i checked in the ds code to see what it is so if you change it my value will not be good anymore the best would be a onafterclose prop thanks describe the solution you d like add a prop onafterclose or the naming that you want on the actionmenu that will be triggered after the animation is completed describe alternatives you ve considered as mentionned before i used a settimeout but it is not a good solution in my opinion additional context no response
- binary_label: 1
|
**Row 19,685** (id 26,034,187,877)
- type: IssuesEvent
- created_at: 2022-12-22 02:00:08
- repo: lizhihao6/get-daily-arxiv-noti
- repo_url: https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
- action: opened
- title: New submissions for Thu, 22 Dec 22
- labels: event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
- body:
## Keyword: events
### Privacy-Protecting Behaviours of Risk Detection in People with Dementia using Videos
- **Authors:** Pratik K. Mishra, Andrea Iaboni, Bing Ye, Kristine Newman, Alex Mihailidis, Shehroz S. Khan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.10682
- **Pdf link:** https://arxiv.org/pdf/2212.10682
- **Abstract**
People living with dementia often exhibit behavioural and psychological symptoms of dementia that can put their and others' safety at risk. Existing video surveillance systems in long-term care facilities can be used to monitor such behaviours of risk to alert the staff to prevent potential injuries or death in some cases. However, these behaviours of risk events are heterogeneous and infrequent in comparison to normal events. Moreover, analyzing raw videos can also raise privacy concerns. In this paper, we present two novel privacy-protecting video-based anomaly detection approaches to detect behaviours of risks in people with dementia. We either extracted body pose information as skeletons and use semantic segmentation masks to replace multiple humans in the scene with their semantic boundaries. Our work differs from most existing approaches for video anomaly detection that focus on appearance-based features, which can put the privacy of a person at risk and is also susceptible to pixel-based noise, including illumination and viewing direction. We used anonymized videos of normal activities to train customized spatio-temporal convolutional autoencoders and identify behaviours of risk as anomalies. We show our results on a real-world study conducted in a dementia care unit with patients with dementia, containing approximately 21 hours of normal activities data for training and 9 hours of data containing normal and behaviours of risk events for testing. We compared our approaches with the original RGB videos and obtained an equivalent area under the receiver operating characteristic curve performance of 0.807 for the skeleton-based approach and 0.823 for the segmentation mask-based approach. This is one of the first studies to incorporate privacy for the detection of behaviours of risks in people with dementia.
### Deep set conditioned latent representations for action recognition
- **Authors:** Akash Singh, Tom De Schepper, Kevin Mets, Peter Hellinckx, Jose Oramas, Steven Latre
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.11030
- **Pdf link:** https://arxiv.org/pdf/2212.11030
- **Abstract**
In recent years multi-label, multi-class video action recognition has gained significant popularity. While reasoning over temporally connected atomic actions is mundane for intelligent species, standard artificial neural networks (ANN) still struggle to classify them. In the real world, atomic actions often temporally connect to form more complex composite actions. The challenge lies in recognising composite action of varying durations while other distinct composite or atomic actions occur in the background. Drawing upon the success of relational networks, we propose methods that learn to reason over the semantic concept of objects and actions. We empirically show how ANNs benefit from pretraining, relational inductive biases and unordered set-based latent representations. In this paper we propose deep set conditioned I3D (SCI3D), a two stream relational network that employs latent representation of state and visual representation for reasoning over events and actions. They learn to reason about temporally connected actions in order to identify all of them in the video. The proposed method achieves an improvement of around 1.49% mAP in atomic action recognition and 17.57% mAP in composite action recognition, over a I3D-NL baseline, on the CATER dataset.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### High-Throughput, High-Performance Deep Learning-Driven Light Guide Plate Surface Visual Quality Inspection Tailored for Real-World Manufacturing Environments
- **Authors:** Carol Xu, Mahmoud Famouri, Gautam Bathla, Mohammad Javad Shafiee, Alexander Wong
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2212.10632
- **Pdf link:** https://arxiv.org/pdf/2212.10632
- **Abstract**
Light guide plates are essential optical components widely used in a diverse range of applications ranging from medical lighting fixtures to back-lit TV displays. In this work, we introduce a fully-integrated, high-throughput, high-performance deep learning-driven workflow for light guide plate surface visual quality inspection (VQI) tailored for real-world manufacturing environments. To enable automated VQI on the edge computing within the fully-integrated VQI system, a highly compact deep anti-aliased attention condenser neural network (which we name LightDefectNet) tailored specifically for light guide plate surface defect detection in resource-constrained scenarios was created via machine-driven design exploration with computational and "best-practices" constraints as well as L_1 paired classification discrepancy loss. Experiments show that LightDetectNet achieves a detection accuracy of ~98.2% on the LGPSDD benchmark while having just 770K parameters (~33X and ~6.9X lower than ResNet-50 and EfficientNet-B0, respectively) and ~93M FLOPs (~88X and ~8.4X lower than ResNet-50 and EfficientNet-B0, respectively) and ~8.8X faster inference speed than EfficientNet-B0 on an embedded ARM processor. As such, the proposed deep learning-driven workflow, integrated with the aforementioned LightDefectNet neural network, is highly suited for high-throughput, high-performance light plate surface VQI within real-world manufacturing environments.
### Diamond Abrasive Electroplated Surface Anomaly Detection using Convolutional Neural Networks for Industrial Quality Inspection
- **Authors:** Parviz Ali
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.11122
- **Pdf link:** https://arxiv.org/pdf/2212.11122
- **Abstract**
Electroplated diamond abrasive tools require nickel coating on a metal surface for abrasive bonding and part functionality. The electroplated nickel-coated abrasive tool is expected to have a high-quality part performance by having a nickel coating thickness of between 50% to 60% of the abrasive median diameter, uniformity of the nickel layer, abrasive distribution over the electroplated surface, and bright gloss. Electroplating parameters are set accordingly for this purpose. Industrial quality inspection for defects of these abrasive electroplated parts with optical inspection instruments is extremely challenging due to the diamond's light refraction, dispersion nature, and reflective bright nickel surface. The difficulty posed by this challenge requires parts to be quality inspected manually with an eye loupe that is subjective and costly. In this study, we use a Convolutional Neural Network (CNN) model in the production line to detect abrasive electroplated part anomalies allowing us to fix or eliminate those parts or elements that are in bad condition from the production chain and ultimately reduce manual quality inspection cost. We used 744 samples to train our model. Our model successfully identified over 99% of the parts with an anomaly. Keywords: Artificial Intelligence, Anomaly Detection, Industrial Quality Inspection, Electroplating, Diamond Abrasive Tool
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Exploring Content Relationships for Distilling Efficient GANs
- **Authors:** Lizhou You, Mingbao Lin, Tie Hu, Fei Chao, Rongrong Ji
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.11091
- **Pdf link:** https://arxiv.org/pdf/2212.11091
- **Abstract**
This paper proposes a content relationship distillation (CRD) to tackle the over-parameterized generative adversarial networks (GANs) for the serviceability in cutting-edge devices. In contrast to traditional instance-level distillation, we design a novel GAN compression oriented knowledge by slicing the contents of teacher outputs into multiple fine-grained granularities, such as row/column strips (global information) and image patches (local information), modeling the relationships among them, such as pairwise distance and triplet-wise angle, and encouraging the student to capture these relationships within its output contents. Built upon our proposed content-level distillation, we also deploy an online teacher discriminator, which keeps updating when co-trained with the teacher generator and keeps freezing when co-trained with the student generator for better adversarial training. We perform extensive experiments on three benchmark datasets, the results of which show that our CRD reaches the most complexity reduction on GANs while obtaining the best performance in comparison with existing methods. For example, we reduce MACs of CycleGAN by around 40x and parameters by over 80x, meanwhile, 46.61 FIDs are obtained compared with these of 51.92 for the current state-of-the-art. Code of this project is available at https://github.com/TheKernelZ/CRD.
### Continual Learning Approaches for Anomaly Detection
- **Authors:** Davide Dalle Pezze, Eugenia Anello, Chiara Masiero, Gian Antonio Susto
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.11192
- **Pdf link:** https://arxiv.org/pdf/2212.11192
- **Abstract**
Anomaly Detection is a relevant problem that arises in numerous real-world applications, especially when dealing with images. However, there has been little research for this task in the Continual Learning setting. In this work, we introduce a novel approach called SCALE (SCALing is Enough) to perform Compressed Replay in a framework for Anomaly Detection in Continual Learning setting. The proposed technique scales and compresses the original images using a Super Resolution model which, to the best of our knowledge, is studied for the first time in the Continual Learning setting. SCALE can achieve a high level of compression while maintaining a high level of image reconstruction quality. In conjunction with other Anomaly Detection approaches, it can achieve optimal results. To validate the proposed approach, we use a real-world dataset of images with pixel-based anomalies, with the scope to provide a reliable benchmark for Anomaly Detection in the context of Continual Learning, serving as a foundation for further advancements in the field.
## Keyword: RAW
### Privacy-Protecting Behaviours of Risk Detection in People with Dementia using Videos
- **Authors:** Pratik K. Mishra, Andrea Iaboni, Bing Ye, Kristine Newman, Alex Mihailidis, Shehroz S. Khan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.10682
- **Pdf link:** https://arxiv.org/pdf/2212.10682
- **Abstract**
People living with dementia often exhibit behavioural and psychological symptoms of dementia that can put their and others' safety at risk. Existing video surveillance systems in long-term care facilities can be used to monitor such behaviours of risk to alert the staff to prevent potential injuries or death in some cases. However, these behaviours of risk events are heterogeneous and infrequent in comparison to normal events. Moreover, analyzing raw videos can also raise privacy concerns. In this paper, we present two novel privacy-protecting video-based anomaly detection approaches to detect behaviours of risks in people with dementia. We either extracted body pose information as skeletons and use semantic segmentation masks to replace multiple humans in the scene with their semantic boundaries. Our work differs from most existing approaches for video anomaly detection that focus on appearance-based features, which can put the privacy of a person at risk and is also susceptible to pixel-based noise, including illumination and viewing direction. We used anonymized videos of normal activities to train customized spatio-temporal convolutional autoencoders and identify behaviours of risk as anomalies. We show our results on a real-world study conducted in a dementia care unit with patients with dementia, containing approximately 21 hours of normal activities data for training and 9 hours of data containing normal and behaviours of risk events for testing. We compared our approaches with the original RGB videos and obtained an equivalent area under the receiver operating characteristic curve performance of 0.807 for the skeleton-based approach and 0.823 for the segmentation mask-based approach. This is one of the first studies to incorporate privacy for the detection of behaviours of risks in people with dementia.
### Attention-Aware Anime Line Drawing Colorization
- **Authors:** Yu Cao, Hao Tian, P.Y. Mok
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Graphics (cs.GR); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2212.10988
- **Pdf link:** https://arxiv.org/pdf/2212.10988
- **Abstract**
Automatic colorization of anime line drawing has attracted much attention in recent years since it can substantially benefit the animation industry. User-hint based methods are the mainstream approach for line drawing colorization, while reference-based methods offer a more intuitive approach. Nevertheless, although reference-based methods can improve feature aggregation of the reference image and the line drawing, the colorization results are not compelling in terms of color consistency or semantic correspondence. In this paper, we introduce an attention-based model for anime line drawing colorization, in which a channel-wise and spatial-wise Convolutional Attention module is used to improve the ability of the encoder for feature extraction and key area perception, and a Stop-Gradient Attention module with cross-attention and self-attention is used to tackle the cross-domain long-range dependency problem. Extensive experiments show that our method outperforms other SOTA methods, with more accurate line structure and semantic color information.
### Deep set conditioned latent representations for action recognition
- **Authors:** Akash Singh, Tom De Schepper, Kevin Mets, Peter Hellinckx, Jose Oramas, Steven Latre
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.11030
- **Pdf link:** https://arxiv.org/pdf/2212.11030
- **Abstract**
In recent years multi-label, multi-class video action recognition has gained significant popularity. While reasoning over temporally connected atomic actions is mundane for intelligent species, standard artificial neural networks (ANN) still struggle to classify them. In the real world, atomic actions often temporally connect to form more complex composite actions. The challenge lies in recognising composite action of varying durations while other distinct composite or atomic actions occur in the background. Drawing upon the success of relational networks, we propose methods that learn to reason over the semantic concept of objects and actions. We empirically show how ANNs benefit from pretraining, relational inductive biases and unordered set-based latent representations. In this paper we propose deep set conditioned I3D (SCI3D), a two stream relational network that employs latent representation of state and visual representation for reasoning over events and actions. They learn to reason about temporally connected actions in order to identify all of them in the video. The proposed method achieves an improvement of around 1.49% mAP in atomic action recognition and 17.57% mAP in composite action recognition, over a I3D-NL baseline, on the CATER dataset.
### Similarity Contrastive Estimation for Image and Video Soft Contrastive Self-Supervised Learning
- **Authors:** Julien Denize, Jaonary Rabarisoa, Astrid Orcesi, Romain Hérault
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.11187
- **Pdf link:** https://arxiv.org/pdf/2212.11187
- **Abstract**
Contrastive representation learning has proven to be an effective self-supervised learning method for images and videos. Most successful approaches are based on Noise Contrastive Estimation (NCE) and use different views of an instance as positives that should be contrasted with other instances, called negatives, that are considered as noise. However, several instances in a dataset are drawn from the same distribution and share underlying semantic information. A good data representation should contain relations between the instances, or semantic similarity and dissimilarity, that contrastive learning harms by considering all negatives as noise. To circumvent this issue, we propose a novel formulation of contrastive learning using semantic similarity between instances called Similarity Contrastive Estimation (SCE). Our training objective is a soft contrastive one that brings the positives closer and estimates a continuous distribution to push or pull negative instances based on their learned similarities. We validate empirically our approach on both image and video representation learning. We show that SCE performs competitively with the state of the art on the ImageNet linear evaluation protocol for fewer pretraining epochs and that it generalizes to several downstream image tasks. We also show that SCE reaches state-of-the-art results for pretraining video representation and that the learned representation can generalize to video downstream tasks.
## Keyword: raw image
There is no result
- index: 2.0
- text_combine:
New submissions for Thu, 22 Dec 22 - ## Keyword: events
### Privacy-Protecting Behaviours of Risk Detection in People with Dementia using Videos
- **Authors:** Pratik K. Mishra, Andrea Iaboni, Bing Ye, Kristine Newman, Alex Mihailidis, Shehroz S. Khan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.10682
- **Pdf link:** https://arxiv.org/pdf/2212.10682
- **Abstract**
People living with dementia often exhibit behavioural and psychological symptoms of dementia that can put their and others' safety at risk. Existing video surveillance systems in long-term care facilities can be used to monitor such behaviours of risk to alert the staff to prevent potential injuries or death in some cases. However, these behaviours of risk events are heterogeneous and infrequent in comparison to normal events. Moreover, analyzing raw videos can also raise privacy concerns. In this paper, we present two novel privacy-protecting video-based anomaly detection approaches to detect behaviours of risks in people with dementia. We either extracted body pose information as skeletons and use semantic segmentation masks to replace multiple humans in the scene with their semantic boundaries. Our work differs from most existing approaches for video anomaly detection that focus on appearance-based features, which can put the privacy of a person at risk and is also susceptible to pixel-based noise, including illumination and viewing direction. We used anonymized videos of normal activities to train customized spatio-temporal convolutional autoencoders and identify behaviours of risk as anomalies. We show our results on a real-world study conducted in a dementia care unit with patients with dementia, containing approximately 21 hours of normal activities data for training and 9 hours of data containing normal and behaviours of risk events for testing. We compared our approaches with the original RGB videos and obtained an equivalent area under the receiver operating characteristic curve performance of 0.807 for the skeleton-based approach and 0.823 for the segmentation mask-based approach. This is one of the first studies to incorporate privacy for the detection of behaviours of risks in people with dementia.
### Deep set conditioned latent representations for action recognition
- **Authors:** Akash Singh, Tom De Schepper, Kevin Mets, Peter Hellinckx, Jose Oramas, Steven Latre
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.11030
- **Pdf link:** https://arxiv.org/pdf/2212.11030
- **Abstract**
In recent years, multi-label, multi-class video action recognition has gained significant popularity. While reasoning over temporally connected atomic actions is mundane for intelligent species, standard artificial neural networks (ANN) still struggle to classify them. In the real world, atomic actions often temporally connect to form more complex composite actions. The challenge lies in recognising composite actions of varying durations while other distinct composite or atomic actions occur in the background. Drawing upon the success of relational networks, we propose methods that learn to reason over the semantic concept of objects and actions. We empirically show how ANNs benefit from pretraining, relational inductive biases and unordered set-based latent representations. In this paper we propose deep set conditioned I3D (SCI3D), a two-stream relational network that employs latent representation of state and visual representation for reasoning over events and actions. They learn to reason about temporally connected actions in order to identify all of them in the video. The proposed method achieves an improvement of around 1.49% mAP in atomic action recognition and 17.57% mAP in composite action recognition, over an I3D-NL baseline, on the CATER dataset.
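The "unordered set-based latent representations" the abstract mentions reduce to pooling per-object latents with a permutation-invariant operator, so the output does not depend on object ordering. A minimal sketch (sum pooling; not the authors' code):

```python
def set_pool(latents):
    """Sum-pool a set of per-object latent vectors into one vector.
    The result is identical for any ordering of the input set."""
    dim = len(latents[0])
    return [sum(v[i] for v in latents) for i in range(dim)]
```

Because addition is commutative, `set_pool(a)` equals `set_pool(reversed(a))`, which is exactly the inductive bias a set representation provides.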
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### High-Throughput, High-Performance Deep Learning-Driven Light Guide Plate Surface Visual Quality Inspection Tailored for Real-World Manufacturing Environments
- **Authors:** Carol Xu, Mahmoud Famouri, Gautam Bathla, Mohammad Javad Shafiee, Alexander Wong
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2212.10632
- **Pdf link:** https://arxiv.org/pdf/2212.10632
- **Abstract**
Light guide plates are essential optical components widely used in a diverse range of applications ranging from medical lighting fixtures to back-lit TV displays. In this work, we introduce a fully-integrated, high-throughput, high-performance deep learning-driven workflow for light guide plate surface visual quality inspection (VQI) tailored for real-world manufacturing environments. To enable automated VQI on the edge computing within the fully-integrated VQI system, a highly compact deep anti-aliased attention condenser neural network (which we name LightDefectNet) tailored specifically for light guide plate surface defect detection in resource-constrained scenarios was created via machine-driven design exploration with computational and "best-practices" constraints as well as L_1 paired classification discrepancy loss. Experiments show that LightDefectNet achieves a detection accuracy of ~98.2% on the LGPSDD benchmark while having just 770K parameters (~33X and ~6.9X lower than ResNet-50 and EfficientNet-B0, respectively) and ~93M FLOPs (~88X and ~8.4X lower than ResNet-50 and EfficientNet-B0, respectively) and ~8.8X faster inference speed than EfficientNet-B0 on an embedded ARM processor. As such, the proposed deep learning-driven workflow, integrated with the aforementioned LightDefectNet neural network, is highly suited for high-throughput, high-performance light plate surface VQI within real-world manufacturing environments.
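The quoted ~33X and ~6.9X parameter reductions can be sanity-checked against commonly cited baseline sizes. The baseline counts below are my approximations, not figures from the paper:

```python
# Parameter count for LightDefectNet is taken from the abstract; the
# baseline counts are widely cited approximations (assumptions).
lightdefectnet_params = 770_000
resnet50_params = 25_600_000        # ~25.6M
efficientnet_b0_params = 5_300_000  # ~5.3M

ratio_vs_resnet50 = resnet50_params / lightdefectnet_params         # ~33x
ratio_vs_efficientnet = efficientnet_b0_params / lightdefectnet_params  # ~6.9x
```

Both ratios land within rounding of the abstract's claims, which suggests the comparison is against the standard ImageNet variants of those backbones.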
### Diamond Abrasive Electroplated Surface Anomaly Detection using Convolutional Neural Networks for Industrial Quality Inspection
- **Authors:** Parviz Ali
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.11122
- **Pdf link:** https://arxiv.org/pdf/2212.11122
- **Abstract**
Electroplated diamond abrasive tools require nickel coating on a metal surface for abrasive bonding and part functionality. The electroplated nickel-coated abrasive tool is expected to have a high-quality part performance by having a nickel coating thickness of between 50% and 60% of the abrasive median diameter, uniformity of the nickel layer, abrasive distribution over the electroplated surface, and bright gloss. Electroplating parameters are set accordingly for this purpose. Industrial quality inspection for defects of these abrasive electroplated parts with optical inspection instruments is extremely challenging due to the diamond's light refraction, dispersion nature, and reflective bright nickel surface. The difficulty posed by this challenge requires parts to be quality inspected manually with an eye loupe, which is subjective and costly. In this study, we use a Convolutional Neural Network (CNN) model in the production line to detect abrasive electroplated part anomalies, allowing us to fix or eliminate those parts or elements that are in bad condition from the production chain and ultimately reduce manual quality inspection cost. We used 744 samples to train our model. Our model successfully identified over 99% of the parts with an anomaly. Keywords: Artificial Intelligence, Anomaly Detection, Industrial Quality Inspection, Electroplating, Diamond Abrasive Tool
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Exploring Content Relationships for Distilling Efficient GANs
- **Authors:** Lizhou You, Mingbao Lin, Tie Hu, Fei Chao, Rongrong Ji
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.11091
- **Pdf link:** https://arxiv.org/pdf/2212.11091
- **Abstract**
This paper proposes a content relationship distillation (CRD) to tackle the over-parameterized generative adversarial networks (GANs) for the serviceability in cutting-edge devices. In contrast to traditional instance-level distillation, we design a novel GAN compression oriented knowledge by slicing the contents of teacher outputs into multiple fine-grained granularities, such as row/column strips (global information) and image patches (local information), modeling the relationships among them, such as pairwise distance and triplet-wise angle, and encouraging the student to capture these relationships within its output contents. Built upon our proposed content-level distillation, we also deploy an online teacher discriminator, which keeps updating when co-trained with the teacher generator and keeps freezing when co-trained with the student generator for better adversarial training. We perform extensive experiments on three benchmark datasets, the results of which show that our CRD reaches the most complexity reduction on GANs while obtaining the best performance in comparison with existing methods. For example, we reduce MACs of CycleGAN by around 40x and parameters by over 80x; meanwhile, 46.61 FIDs are obtained compared with those of 51.92 for the current state-of-the-art. Code of this project is available at https://github.com/TheKernelZ/CRD.
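The pairwise-distance relation the abstract describes can be sketched concretely: compute the distance matrix among a set of content slices for teacher and student, then penalize the gap between the two matrices. This is a simplified illustration with names of my choosing, not the CRD implementation:

```python
import math

def pairwise_distances(slices):
    """Relation matrix: Euclidean distance between every pair of
    content-slice embeddings (strips or patches)."""
    n = len(slices)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d[i][j] = math.sqrt(sum((a - b) ** 2 for a, b in zip(slices[i], slices[j])))
    return d

def relation_loss(teacher_slices, student_slices):
    """Mean absolute gap between teacher and student relation matrices."""
    dt = pairwise_distances(teacher_slices)
    ds = pairwise_distances(student_slices)
    n = len(dt)
    return sum(abs(dt[i][j] - ds[i][j]) for i in range(n) for j in range(n)) / (n * n)
```

A student that reproduces the teacher's inter-slice geometry gets zero loss even if its absolute pixel values differ, which is the point of distilling relationships rather than instances.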
### Continual Learning Approaches for Anomaly Detection
- **Authors:** Davide Dalle Pezze, Eugenia Anello, Chiara Masiero, Gian Antonio Susto
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.11192
- **Pdf link:** https://arxiv.org/pdf/2212.11192
- **Abstract**
Anomaly Detection is a relevant problem that arises in numerous real-world applications, especially when dealing with images. However, there has been little research for this task in the Continual Learning setting. In this work, we introduce a novel approach called SCALE (SCALing is Enough) to perform Compressed Replay in a framework for Anomaly Detection in Continual Learning setting. The proposed technique scales and compresses the original images using a Super Resolution model which, to the best of our knowledge, is studied for the first time in the Continual Learning setting. SCALE can achieve a high level of compression while maintaining a high level of image reconstruction quality. In conjunction with other Anomaly Detection approaches, it can achieve optimal results. To validate the proposed approach, we use a real-world dataset of images with pixel-based anomalies, with the scope to provide a reliable benchmark for Anomaly Detection in the context of Continual Learning, serving as a foundation for further advancements in the field.
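SCALE's compressed replay stores a downscaled copy of each image and relies on a super-resolution model to reconstruct it. A toy version of that round trip, with average pooling for compression and nearest-neighbour upsampling standing in for the learned SR network (that substitution, and the function names, are mine):

```python
def downscale(img, factor):
    """Average-pool a 2-D grayscale image by `factor`: the compact
    copy that would be kept in the replay buffer."""
    h, w = len(img), len(img[0])
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w // factor)]
            for y in range(h // factor)]

def upscale(img, factor):
    """Nearest-neighbour stand-in for the paper's super-resolution model."""
    return [[img[y // factor][x // factor] for x in range(len(img[0]) * factor)]
            for y in range(len(img) * factor)]
```

At factor 2 the stored copy holds a quarter of the pixels; the paper's contribution is that a learned SR model recovers far more detail than this nearest-neighbour placeholder.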
## Keyword: RAW
### Privacy-Protecting Behaviours of Risk Detection in People with Dementia using Videos
- **Authors:** Pratik K. Mishra, Andrea Iaboni, Bing Ye, Kristine Newman, Alex Mihailidis, Shehroz S. Khan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.10682
- **Pdf link:** https://arxiv.org/pdf/2212.10682
- **Abstract**
People living with dementia often exhibit behavioural and psychological symptoms of dementia that can put their and others' safety at risk. Existing video surveillance systems in long-term care facilities can be used to monitor such behaviours of risk to alert the staff to prevent potential injuries or death in some cases. However, these behaviours of risk events are heterogeneous and infrequent in comparison to normal events. Moreover, analyzing raw videos can also raise privacy concerns. In this paper, we present two novel privacy-protecting video-based anomaly detection approaches to detect behaviours of risks in people with dementia. We either extracted body pose information as skeletons or used semantic segmentation masks to replace multiple humans in the scene with their semantic boundaries. Our work differs from most existing approaches for video anomaly detection that focus on appearance-based features, which can put the privacy of a person at risk and is also susceptible to pixel-based noise, including illumination and viewing direction. We used anonymized videos of normal activities to train customized spatio-temporal convolutional autoencoders and identify behaviours of risk as anomalies. We show our results on a real-world study conducted in a dementia care unit with patients with dementia, containing approximately 21 hours of normal activities data for training and 9 hours of data containing normal and behaviours of risk events for testing. We compared our approaches with the original RGB videos and obtained an equivalent area under the receiver operating characteristic curve performance of 0.807 for the skeleton-based approach and 0.823 for the segmentation mask-based approach. This is one of the first studies to incorporate privacy for the detection of behaviours of risks in people with dementia.
### Attention-Aware Anime Line Drawing Colorization
- **Authors:** Yu Cao, Hao Tian, P.Y. Mok
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Graphics (cs.GR); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2212.10988
- **Pdf link:** https://arxiv.org/pdf/2212.10988
- **Abstract**
Automatic colorization of anime line drawing has attracted much attention in recent years since it can substantially benefit the animation industry. User-hint based methods are the mainstream approach for line drawing colorization, while reference-based methods offer a more intuitive approach. Nevertheless, although reference-based methods can improve feature aggregation of the reference image and the line drawing, the colorization results are not compelling in terms of color consistency or semantic correspondence. In this paper, we introduce an attention-based model for anime line drawing colorization, in which a channel-wise and spatial-wise Convolutional Attention module is used to improve the ability of the encoder for feature extraction and key area perception, and a Stop-Gradient Attention module with cross-attention and self-attention is used to tackle the cross-domain long-range dependency problem. Extensive experiments show that our method outperforms other SOTA methods, with more accurate line structure and semantic color information.
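The cross-attention inside the Stop-Gradient Attention module follows the generic dot-product attention pattern: each line-drawing query attends over reference-image keys and mixes their values. A plain-Python sketch of that pattern only (no stop-gradient, no channel/spatial modules; all names mine):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """For each query, weight the values by softmax of query-key dot
    products; queries come from one domain, keys/values from the other."""
    out = []
    for q in queries:
        weights = softmax([sum(a * b for a, b in zip(q, k)) for k in keys])
        dim = len(values[0])
        out.append([sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)])
    return out
```

Because the weights form a probability distribution, each output is a convex combination of reference features, which is what lets color information transfer from the reference to the line drawing.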
### Deep set conditioned latent representations for action recognition
- **Authors:** Akash Singh, Tom De Schepper, Kevin Mets, Peter Hellinckx, Jose Oramas, Steven Latre
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2212.11030
- **Pdf link:** https://arxiv.org/pdf/2212.11030
- **Abstract**
In recent years, multi-label, multi-class video action recognition has gained significant popularity. While reasoning over temporally connected atomic actions is mundane for intelligent species, standard artificial neural networks (ANN) still struggle to classify them. In the real world, atomic actions often temporally connect to form more complex composite actions. The challenge lies in recognising composite actions of varying durations while other distinct composite or atomic actions occur in the background. Drawing upon the success of relational networks, we propose methods that learn to reason over the semantic concept of objects and actions. We empirically show how ANNs benefit from pretraining, relational inductive biases and unordered set-based latent representations. In this paper we propose deep set conditioned I3D (SCI3D), a two-stream relational network that employs latent representation of state and visual representation for reasoning over events and actions. They learn to reason about temporally connected actions in order to identify all of them in the video. The proposed method achieves an improvement of around 1.49% mAP in atomic action recognition and 17.57% mAP in composite action recognition, over an I3D-NL baseline, on the CATER dataset.
### Similarity Contrastive Estimation for Image and Video Soft Contrastive Self-Supervised Learning
- **Authors:** Julien Denize, Jaonary Rabarisoa, Astrid Orcesi, Romain Hérault
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2212.11187
- **Pdf link:** https://arxiv.org/pdf/2212.11187
- **Abstract**
Contrastive representation learning has proven to be an effective self-supervised learning method for images and videos. Most successful approaches are based on Noise Contrastive Estimation (NCE) and use different views of an instance as positives that should be contrasted with other instances, called negatives, that are considered as noise. However, several instances in a dataset are drawn from the same distribution and share underlying semantic information. A good data representation should contain relations between the instances, or semantic similarity and dissimilarity, that contrastive learning harms by considering all negatives as noise. To circumvent this issue, we propose a novel formulation of contrastive learning using semantic similarity between instances called Similarity Contrastive Estimation (SCE). Our training objective is a soft contrastive one that brings the positives closer and estimates a continuous distribution to push or pull negative instances based on their learned similarities. We validate empirically our approach on both image and video representation learning. We show that SCE performs competitively with the state of the art on the ImageNet linear evaluation protocol for fewer pretraining epochs and that it generalizes to several downstream image tasks. We also show that SCE reaches state-of-the-art results for pretraining video representation and that the learned representation can generalize to video downstream tasks.
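SCE's "soft contrastive" target can be read as interpolating between the usual NCE one-hot positive and a distribution over instances derived from learned similarities. A hedged sketch of that target construction (the mixing weight and temperature names are mine; the paper's exact formulation differs in detail):

```python
import math

def softmax(xs, temperature=1.0):
    m = max(xs)
    exps = [math.exp((x - m) / temperature) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sce_targets(similarities, positive_index, lam=0.5, temperature=0.1):
    """Soft contrastive target: a convex mix of the one-hot positive
    (weight lam) and a similarity-based distribution (weight 1 - lam),
    so semantically close negatives are pushed away less strongly."""
    sim_dist = softmax(similarities, temperature)
    return [lam * (1.0 if i == positive_index else 0.0) + (1.0 - lam) * p
            for i, p in enumerate(sim_dist)]
```

With `lam=1.0` this collapses back to hard NCE; lower values let similar instances share target mass instead of being treated as pure noise.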
## Keyword: raw image
There is no result
|
process
|
| 1
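The abstract above describes SCE as a softened variant of NCE-based contrastive learning. For orientation, the standard InfoNCE form that such methods build on can be sketched as follows (notation is my own, not taken from the paper):

```latex
% InfoNCE for anchor z_i with positive z_j and temperature \tau;
% the denominator sums over the positive and all other instances in the batch.
\mathcal{L}_i = -\log
  \frac{\exp\bigl(\operatorname{sim}(z_i, z_j)/\tau\bigr)}
       {\sum_{k \neq i} \exp\bigl(\operatorname{sim}(z_i, z_k)/\tau\bigr)}
```

Per the abstract, SCE's modification is to replace the hard one-hot target over instances with a soft target distribution estimated from inter-instance similarities, so negatives are pushed or pulled according to learned similarity rather than treated uniformly as noise.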
|
5,705
| 8,167,584,322
|
IssuesEvent
|
2018-08-26 00:51:08
|
ValveSoftware/Proton
|
https://api.github.com/repos/ValveSoftware/Proton
|
closed
|
Into the Breach: Choppy sound
|
Game compatibility
|
Distribution: Arch Linux
Driver: Nvidia 390xx latest
Sound is choppy, extremely slow. Rest of the game runs fine.
|
True
|
Into the Breach: Choppy sound - Distribution: Arch Linux
Driver: Nvidia 390xx latest
Sound is choppy, extremely slow. Rest of the game runs fine.
|
non_process
|
into the breach choppy sound distribution arch linux driver nvidia latest sound is choppy extremely slow rest of the game runs fine
| 0
|
678,886
| 23,214,752,674
|
IssuesEvent
|
2022-08-02 13:16:04
|
permafrost06/contacts-manager-wp
|
https://api.github.com/repos/permafrost06/contacts-manager-wp
|
closed
|
Error and exception handlers could be used instead of try-catch
|
low priority question refactor
|
The PHP functions `set_error_handler` and `set_exception_handler` look very interesting. Once exception and error codes are implemented and standardized for this project, the code would look a lot nicer with the use of these functions.
I'm not sure about readability after this change and if they'll be a great fit for this project.
|
1.0
|
Error and exception handlers could be used instead of try-catch - The PHP functions `set_error_handler` and `set_exception_handler` look very interesting. Once exception and error codes are implemented and standardized for this project, the code would look a lot nicer with the use of these functions.
I'm not sure about readability after this change and if they'll be a great fit for this project.
|
non_process
|
error and exception handlers could be used instead of try catch the php functions set error handler and set exception handler look very interesting once exception and error codes are implemented and standardized for this project the code would look a lot nicer with the use of these functions i m not sure about readability after this change and if they ll be a great fit for this project
| 0
|
5,808
| 8,644,714,551
|
IssuesEvent
|
2018-11-26 04:36:23
|
gfrebello/qs-trip-planning-procedure
|
https://api.github.com/repos/gfrebello/qs-trip-planning-procedure
|
closed
|
Update Use Case Diagram
|
Priority:High Process:Create/Update UseCase Model
|
The diagram for the use cases should be updated according to the changes made to the use cases themselves.
|
1.0
|
Update Use Case Diagram - The diagram for the use cases should be updated according to the changes made to the use cases themselves.
|
process
|
update use case diagram the diagram for the use cases should be updated according to the changes made to the use cases themselves
| 1
|
69,965
| 9,366,551,101
|
IssuesEvent
|
2019-04-03 01:19:56
|
edgedb/edgedb
|
https://api.github.com/repos/edgedb/edgedb
|
closed
|
improve eschema documentation
|
documentation
|
Missing documentation and (mostly) syntax tests for:
- [x] attributes
- [x] final concepts/atoms
- [x] abstract and delegated constraints
- [x] document `ON` for constraints and indexes (in particular that it requires parens)
|
1.0
|
improve eschema documentation - Missing documentation and (mostly) syntax tests for:
- [x] attributes
- [x] final concepts/atoms
- [x] abstract and delegated constraints
- [x] document `ON` for constraints and indexes (in particular that it requires parens)
|
non_process
|
improve eschema documentation missing documentation and mostly syntax tests for attributes final concepts atoms abstract and delegated constraints document on for constraints and indexes in particular that it requires parens
| 0
|
37,258
| 18,245,486,986
|
IssuesEvent
|
2021-10-01 17:48:04
|
alteryx/evalml
|
https://api.github.com/repos/alteryx/evalml
|
opened
|
Log transform regression target: update automl to try both ways when applicable
|
enhancement performance
|
Had convo with @bchen1116 and @chukarsten today about #2749, which tracks trying to improve the mathematical method we use in the log normal data check to recommend to users (and to automl) when to enable/disable log transformation for regression targets.
That work will continue, but filing a separate idea for improvement here:
If the data check recommends we apply log transform, perhaps automl should still try pipelines with both transformed and un-transformed targets, place all of those on the rankings leaderboard and let users select between them.
There's probably more complex heuristics we can try too now that we're moving towards @jeremyliweishih 's work on "DefaultAlgorithm": before scanning estimators, as part of the preprocessing/selection/eng discovery step, pick a pipeline to try both with and without log transform, and use the outcome of that experiment to decide whether automl should continue applying log transform for other pipelines or should just stop exploring that option.
|
True
|
Log transform regression target: update automl to try both ways when applicable - Had convo with @bchen1116 and @chukarsten today about #2749, which tracks trying to improve the mathematical method we use in the log normal data check to recommend to users (and to automl) when to enable/disable log transformation for regression targets.
That work will continue, but filing a separate idea for improvement here:
If the data check recommends we apply log transform, perhaps automl should still try pipelines with both transformed and un-transformed targets, place all of those on the rankings leaderboard and let users select between them.
There's probably more complex heuristics we can try too now that we're moving towards @jeremyliweishih 's work on "DefaultAlgorithm": before scanning estimators, as part of the preprocessing/selection/eng discovery step, pick a pipeline to try both with and without log transform, and use the outcome of that experiment to decide whether automl should continue applying log transform for other pipelines or should just stop exploring that option.
|
non_process
|
log transform regression target update automl to try both ways when applicable had convo with and chukarsten today about which tracks trying to improve the mathematical method we use in the log normal data check to recommend to users and to automl when to enable disable log transformation for regression targets that work will continue but filing a separate idea for improvement here if the data check recommends we apply log transform perhaps automl should still try pipelines with both transformed and un transformed targets place all of those on the rankings leaderboard and let users select between them there s probably more complex heuristics we can try too now that we re moving towards jeremyliweishih s work on defaultalgorithm before scanning estimators as part of the preprocessing selection eng discovery step pick a pipeline to try both with and without log transform and use the outcome of that experiment to decide whether automl should continue applying log transform for other pipelines or should just stop exploring that option
| 0
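The log transform discussed in the record above, together with the inverse used to map predictions back to the original target scale, can be sketched as follows (illustrative only, not evalml code; function names are hypothetical):

```javascript
// Forward: y -> log(1 + y); inverse: z -> exp(z) - 1.
// log1p/expm1 are used instead of log/exp to keep precision near zero.
const logTransform = (ys) => ys.map(Math.log1p);
const invLogTransform = (zs) => zs.map(Math.expm1);
```

Trying pipelines both with and without the transform then amounts to fitting once on `logTransform(y)` (and inverting predictions) and once on `y` directly, and comparing the two on the rankings leaderboard.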
|
4,964
| 7,806,210,653
|
IssuesEvent
|
2018-06-11 13:27:43
|
cptechinc/soft-dpluso
|
https://api.github.com/repos/cptechinc/soft-dpluso
|
closed
|
User Role dashboards
|
PHP Permissions Processwire enhancement
|
The main objective is to have dashboards that are made for the User role.
Make a new branch called user-dashboards
Files/Directories that will be affected:
**site/templates/configs
site/templates/redir/redirect.php
site/templates/home.php
site/templates/dashboard**
First we'll have to create config in site/templates/configs/ called user-roles-config.php
In there we'll make an array that looks like
```
$config->user_roles = array(
    'sales-manager' => array(
        'label' => 'Sales Manager',
        'dashboard-redirect' => false
    ),
    'sales-rep' => array(
        'label' => 'Sales Rep',
        'dashboard-redirect' => false
    )
);
```
Indent the above code as needed.
## site/templates/_func.php
go to the function `setupuser($loginID)`;
after the line $loginrecord =, add a new line that is `$user = get_logmuser($loginID)` where loginID is a key value in the `$loginrecord `array;
Then add a new line at the end of the function making a new property for the user called role and set it equal to `$user->role`;
## site/templates/home.php
Don't delete the header () line until the end.
Then do an if statement if `$user->role `is in the array `$config->user_roles` as a key.
if not, go to the dashboard as it currently does. If it exists, look at `$config->user_roles[$logmuser->role]['dashboard-redirect']` and check whether it's empty: if it's empty, go to the dashboard; if not,
redirect to that page.
## site/templates/template-dashboard.php
Once again you'll call the `get_logmuser($loginID)` function, then do an if statement checking whether `$logmuser->role` is in the array `$config->user_roles` as a key; if it is, you'll make `$page->body = $config->paths->content."dashboard/$user->role-dashboard.php"`. If it's not in the array, make `$page->body` equal to its current value;
## site/templates/content/dashboard/
Make two files **site/templates/content/dashboard/sales-manager-dashboard.php**, **site/templates/content/dashboard/sales-rep-dashboard.php**
Then copy the code from **site/templates/content/dashboard/dashboard-page-outline.php** into both, at the top for debugging add an h1 and inside it have the text salesrep or sales manager.
|
1.0
|
User Role dashboards - The main objective is to have dashboards that are made for the User role.
Make a new branch called user-dashboards
Files/Directories that will be affected:
**site/templates/configs
site/templates/redir/redirect.php
site/templates/home.php
site/templates/dashboard**
First we'll have to create config in site/templates/configs/ called user-roles-config.php
In there we'll make an array that looks like
```
$config->user_roles = array(
    'sales-manager' => array(
        'label' => 'Sales Manager',
        'dashboard-redirect' => false
    ),
    'sales-rep' => array(
        'label' => 'Sales Rep',
        'dashboard-redirect' => false
    )
);
```
Indent the above code as needed.
## site/templates/_func.php
go to the function `setupuser($loginID)`;
after the line $loginrecord =, add a new line that is `$user = get_logmuser($loginID)` where loginID is a key value in the `$loginrecord `array;
Then add a new line at the end of the function making a new property for the user called role and set it equal to `$user->role`;
## site/templates/home.php
Don't delete the header () line until the end.
Then do an if statement if `$user->role `is in the array `$config->user_roles` as a key.
if not, go to the dashboard as it currently does. If it exists, look at `$config->user_roles[$logmuser->role]['dashboard-redirect']` and check whether it's empty: if it's empty, go to the dashboard; if not,
redirect to that page.
## site/templates/template-dashboard.php
Once again you'll call the `get_logmuser($loginID)` function, then do an if statement checking whether `$logmuser->role` is in the array `$config->user_roles` as a key; if it is, you'll make `$page->body = $config->paths->content."dashboard/$user->role-dashboard.php"`. If it's not in the array, make `$page->body` equal to its current value;
## site/templates/content/dashboard/
Make two files **site/templates/content/dashboard/sales-manager-dashboard.php**, **site/templates/content/dashboard/sales-rep-dashboard.php**
Then copy the code from **site/templates/content/dashboard/dashboard-page-outline.php** into both, at the top for debugging add an h1 and inside it have the text salesrep or sales manager.
|
process
|
user role dashboards the main objective is to have dashboards that are made for the user role make a new branch called user dashboards files directories that will be affected site templates configs site templates redir redirect php site templates home php site templates dashboard first we ll have to create config in site templates configs called user roles config php in there we ll make an array that looks like config user roles array sales manager array label sales manager dashboard redirect false sales rep array label sales rep dashboard redirect false indent the above code as needed site templates func php go to the function setupuser loginid after the line loginrecord add a new line that is user get logmuser loginid where loginid is a key value in the loginrecord array then add a new line at the end of the function making a new property for the user called role and set it equal to user role site templates home php don t delete the header line until the end then do an if statement if user role is in the array config user roles as a key if not go to dashboard like it currently does if it exists then look at config user roles check if it s empty if it s empty then go to the dashboard if not then redirect to that page site templates template dashboard php once again you ll call the get logmuser loginid function then do an if statement if logmuser role is in the array config user roles as a key and if it is then you ll make page body config paths content dashboard user role dashboard php if it s not in the array make the page body the current page body value site templates content dashboard make two files site templates content dashboard sales manager dashboard php site templates content dashboard sales rep dashboard php then copy the code from site templates content dashboard dashboard page outline php into both at the top for debugging add an and inside it have the text salesrep or sales manager
| 1
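For illustration only (the issue above targets PHP/ProcessWire; this sketch uses JavaScript and hypothetical names), the role-to-dashboard decision it describes boils down to a keyed lookup with a default:

```javascript
// Hypothetical mirror of $config->user_roles from the issue above.
const userRoles = {
  'sales-manager': { label: 'Sales Manager', dashboardRedirect: false },
  'sales-rep':     { label: 'Sales Rep',     dashboardRedirect: false },
};

// Returns the page a logged-in user should land on:
// unknown role, or an empty/false redirect, falls back to the default dashboard.
function resolveLanding(role) {
  const entry = userRoles[role];
  if (!entry || !entry.dashboardRedirect) return '/dashboard';
  return entry.dashboardRedirect;
}
```

Setting `dashboardRedirect` to a path for some role would then route that role away from the default dashboard, matching the `dashboard-redirect` behavior described in the issue.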
|
307,744
| 23,214,482,605
|
IssuesEvent
|
2022-08-02 13:04:23
|
bbangsyoung/fastcampus-project-board
|
https://api.github.com/repos/bbangsyoung/fastcampus-project-board
|
closed
|
Organizing the GitHub project and issues
|
documentation
|
Set up the GitHub project and organize it by creating cards.
- [ ] Create the project (beta)
- [ ] Create the card list - refer to the lecture curriculum
- [x] Convert them into issues as appropriate
|
1.0
|
Organizing the GitHub project and issues - Set up the GitHub project and organize it by creating cards.
- [ ] Create the project (beta)
- [ ] Create the card list - refer to the lecture curriculum
- [x] Convert them into issues as appropriate
|
non_process
|
organizing the github project and issues set up the github project and organize it by creating cards create the project beta create the card list refer to the lecture curriculum convert them into issues as appropriate
| 0
|
20,287
| 26,921,709,758
|
IssuesEvent
|
2023-02-07 10:55:30
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Cannot spawn shell script if path has spaces
|
child_process doc
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in PowerShell console (Windows)
Subsystem: if known, please specify affected core module name
-->
* **Version**: 14.16.0
* **Platform**: MacOS 10.15.7 (Windows and Linux too)
* **Subsystem**: child_process
### What steps will reproduce the bug?
1. Create a javascript program that spawns a shell script.
2. Ensure that the full path to the shell script has spaces in the directory names.
3. Run the javascript program.
<!--
Enter details about your bug, preferably a simple code snippet that can be
run using `node` directly without installing third-party dependencies.
-->
```javascript
const { spawn } = require('child_process');

async function executeChildProcess(executable, argList) {
  const child = spawn(executable, argList, {
    stdio: [ 'pipe', 'pipe', 'pipe' ],
    shell: true,
    detached: false,
    windowsHide: true
  });
  // onExit: helper (defined elsewhere) that resolves when the child exits
  await onExit(child)
    .then(() => console.log(`${executable} exited with exit code 0`))
    .catch((err) => console.log(`${executable} failed: ${err.toString()}`));
  return child.exitCode;
}

executeChildProcess('/Applications/My Application.app/Contents/tools/myScript.sh', [ '-v' ]).then();
```
### How often does it reproduce? Is there a required condition?
Every single time. Yes, the child_process module should be properly escaping commands that have spaces in them since we cannot control where the user installs the application.
### What is the expected behavior?
The shell script is executed properly
<!--
If possible please provide textual output instead of screenshots.
-->
### What do you see instead?
/bin/sh: /Applications/My: No such file or directory
Applications/My Application.app/Contents/tools/myScript.sh failed: Error: Exit with error code: 127
<!--
If possible please provide textual output instead of screenshots.
-->
### Additional information
<!--
Tell us anything else you think we should know.
-->
|
1.0
|
Cannot spawn shell script if path has spaces - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or output of `"$([Environment]::OSVersion | ForEach-Object VersionString) $(if ([Environment]::Is64BitOperatingSystem) { "x64" } else { "x86" })"` in PowerShell console (Windows)
Subsystem: if known, please specify affected core module name
-->
* **Version**: 14.16.0
* **Platform**: MacOS 10.15.7 (Windows and Linux too)
* **Subsystem**: child_process
### What steps will reproduce the bug?
1. Create a javascript program that spawns a shell script.
2. Ensure that the full path to the shell script has spaces in the directory names.
3. Run the javascript program.
<!--
Enter details about your bug, preferably a simple code snippet that can be
run using `node` directly without installing third-party dependencies.
-->
```javascript
const { spawn } = require('child_process');

async function executeChildProcess(executable, argList) {
  const child = spawn(executable, argList, {
    stdio: [ 'pipe', 'pipe', 'pipe' ],
    shell: true,
    detached: false,
    windowsHide: true
  });
  // onExit: helper (defined elsewhere) that resolves when the child exits
  await onExit(child)
    .then(() => console.log(`${executable} exited with exit code 0`))
    .catch((err) => console.log(`${executable} failed: ${err.toString()}`));
  return child.exitCode;
}

executeChildProcess('/Applications/My Application.app/Contents/tools/myScript.sh', [ '-v' ]).then();
```
### How often does it reproduce? Is there a required condition?
Every single time. Yes, the child_process module should be properly escaping commands that have spaces in them since we cannot control where the user installs the application.
### What is the expected behavior?
The shell script is executed properly
<!--
If possible please provide textual output instead of screenshots.
-->
### What do you see instead?
/bin/sh: /Applications/My: No such file or directory
Applications/My Application.app/Contents/tools/myScript.sh failed: Error: Exit with error code: 127
<!--
If possible please provide textual output instead of screenshots.
-->
### Additional information
<!--
Tell us anything else you think we should know.
-->
|
process
|
cannot spawn shell script if path has spaces thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or output of osversion foreach object versionstring if else in powershell console windows subsystem if known please specify affected core module name version platform macos windows and linux too subsystem child process what steps will reproduce the bug create a javascript program that spawns a shell script ensure that the full path to the shell script has spaces in the directory names run the javascript program enter details about your bug preferably a simple code snippet that can be run using node directly without installing third party dependencies javascript async function executechildprocess executable arglist const child spawn executable arglist stdio shell true detached false windowshide true await onexit child then console log executable exited with exit code catch err console log executable failed err tostring return child exitcode executechildprocess applications my application app contents tools myscript sh then how often does it reproduce is there a required condition every single time yes the child process module should be properly escaping commands that have spaces in them since we cannot control where the user installs the application what is the expected behavior the shell script is executed properly if possible please provide textual output instead of screenshots what do you see instead bin sh applications my no such file or directory applications my application app contents tools myscript sh failed error exit with error code if possible please provide textual output instead of screenshots additional information tell us anything else you think we should know
| 1
|
7,552
| 10,675,566,925
|
IssuesEvent
|
2019-10-21 12:00:02
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
System.Diagnostics.Process tests fail on UAP with StackOverflowException
|
area-System.Diagnostics.Process disabled-test os-windows-uwp
|
Disabling the tests for now to enable CI leg. Grep for `[ActiveIssue(31908, TargetFrameworkMonikers.Uap)]`
```
C:\git\corefx2\src\System.Diagnostics.Process\tests>dotnet msbuild /t:RebuildAndTest /p:TargetGroup=uap
Microsoft (R) Build Engine version 15.8.166+gd4e8d81a88 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
RemoteExecutorConsoleApp -> C:\git\corefx2\bin\AnyOS.AnyCPU.Debug\RemoteExecutorConsoleApp\netstandard\RemoteExecutorConsoleApp.exe
System.Diagnostics.Process.Tests -> C:\git\corefx2\bin\Windows_NT.AnyCPU.Debug\System.Diagnostics.Process.Tests\uap\System.Diagnostics.Process.Tests.dll
Using C:\git\corefx2\bin\testhost\uap-Windows_NT-Debug-x64\ as the test runtime folder.
Executing in C:\git\corefx2\bin\tests\System.Diagnostics.Process.Tests\uap-Windows_NT-Debug-x64\
1 file(s) copied.
Got manifest file appxmanifest.xml
Removing any previous installation...
Installing the application...
Package Full Name is 5cd54353-3ed7-4a6e-a72f-db349f28867c_1.0.0.0_x64__v52bfwc2c21ha
SUCCESS
ExitCode 100
Discovering: System.Diagnostics.Process.Tests
Discovered: System.Diagnostics.Process.Tests
Starting: System.Diagnostics.Process.Tests
Process is terminating due to StackOverflowException.
Attempting to cancel the build...
^CC:\git\corefx2\Tools\tests.targets(588,5): warning MSB5021: Terminating the task executable "cmd" and its child processes because the build was canceled. [C:\git\corefx2\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
```
|
1.0
|
System.Diagnostics.Process tests fail on UAP with StackOverflowException - Disabling the tests for now to enable CI leg. Grep for `[ActiveIssue(31908, TargetFrameworkMonikers.Uap)]`
```
C:\git\corefx2\src\System.Diagnostics.Process\tests>dotnet msbuild /t:RebuildAndTest /p:TargetGroup=uap
Microsoft (R) Build Engine version 15.8.166+gd4e8d81a88 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
RemoteExecutorConsoleApp -> C:\git\corefx2\bin\AnyOS.AnyCPU.Debug\RemoteExecutorConsoleApp\netstandard\RemoteExecutorConsoleApp.exe
System.Diagnostics.Process.Tests -> C:\git\corefx2\bin\Windows_NT.AnyCPU.Debug\System.Diagnostics.Process.Tests\uap\System.Diagnostics.Process.Tests.dll
Using C:\git\corefx2\bin\testhost\uap-Windows_NT-Debug-x64\ as the test runtime folder.
Executing in C:\git\corefx2\bin\tests\System.Diagnostics.Process.Tests\uap-Windows_NT-Debug-x64\
1 file(s) copied.
Got manifest file appxmanifest.xml
Removing any previous installation...
Installing the application...
Package Full Name is 5cd54353-3ed7-4a6e-a72f-db349f28867c_1.0.0.0_x64__v52bfwc2c21ha
SUCCESS
ExitCode 100
Discovering: System.Diagnostics.Process.Tests
Discovered: System.Diagnostics.Process.Tests
Starting: System.Diagnostics.Process.Tests
Process is terminating due to StackOverflowException.
Attempting to cancel the build...
^CC:\git\corefx2\Tools\tests.targets(588,5): warning MSB5021: Terminating the task executable "cmd" and its child processes because the build was canceled. [C:\git\corefx2\src\System.Diagnostics.Process\tests\System.Diagnostics.Process.Tests.csproj]
```
|
process
|
system diagnostics process tests fail on uap with stackoverflowexception disabling the tests for now to enable ci leg grep for c git src system diagnostics process tests dotnet msbuild t rebuildandtest p targetgroup uap microsoft r build engine version for net core copyright c microsoft corporation all rights reserved remoteexecutorconsoleapp c git bin anyos anycpu debug remoteexecutorconsoleapp netstandard remoteexecutorconsoleapp exe system diagnostics process tests c git bin windows nt anycpu debug system diagnostics process tests uap system diagnostics process tests dll using c git bin testhost uap windows nt debug as the test runtime folder executing in c git bin tests system diagnostics process tests uap windows nt debug file s copied got manifest file appxmanifest xml removing any previous installation installing the application package full name is success exitcode discovering system diagnostics process tests discovered system diagnostics process tests starting system diagnostics process tests process is terminating due to stackoverflowexception attempting to cancel the build cc git tools tests targets warning terminating the task executable cmd and its child processes because the build was canceled
| 1
|
453,245
| 13,067,051,722
|
IssuesEvent
|
2020-07-30 23:15:44
|
GoogleCloudPlatform/nodejs-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/nodejs-docs-samples
|
closed
|
run: add minimal e2e test coverage to sample apps
|
api: run priority: p2 type: cleanup
|
Each Cloud Run sample app should have minimal e2e test coverage that establishes the following:
* Container builds
* Container deploys to Cloud Run
* The nodejs service inside the container is able to start and respond to requests
For minimal testing we are not concerned with whether the logic of the node.js service is correct, as long as a 200 or 400 response is received we can be confident it was produced by the running node.js service.
|
1.0
|
run: add minimal e2e test coverage to sample apps - Each Cloud Run sample app should have minimal e2e test coverage that establishes the following:
* Container builds
* Container deploys to Cloud Run
* The nodejs service inside the container is able to start and respond to requests
For minimal testing we are not concerned with whether the logic of the node.js service is correct, as long as a 200 or 400 response is received we can be confident it was produced by the running node.js service.
|
non_process
|
run add minimal test coverage to sample apps each cloud run sample app should have minimal test coverage that establishes the following container builds container deploys to cloud run the nodejs service inside the container is able to start and respond to requests for minimal testing we are not concerned with whether the logic of the node js service is correct as long as a or response is received we can be confident it was produced by the running node js service
| 0
|
22,759
| 6,290,115,009
|
IssuesEvent
|
2017-07-19 20:46:36
|
NYPL-discovery/discovery-front-end
|
https://api.github.com/repos/NYPL-discovery/discovery-front-end
|
closed
|
Search Results number display
|
code refactor in progress Search Results
|
Add back the number of search results. Modify the component to just `x results`.
|
1.0
|
Search Results number display - Add back the number of search results. Modify the component to just `x results`.
|
non_process
|
search results number display add back the number of search results modify the component to just x results
| 0
|
14,403
| 17,456,407,054
|
IssuesEvent
|
2021-08-06 02:20:18
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
opened
|
Address flaky proxy logging tests
|
process: flaky test
|
The Cypress dashboard shows tests added and modified as part of the Proxy Logging PR #16730 to be quite flaky.
This is probably because of how log updates are debounced in the driver, causing flaky testing behavior. There should be a good way to address this using Cypress.
|
1.0
|
Address flaky proxy logging tests - The Cypress dashboard shows tests added and modified as part of the Proxy Logging PR #16730 to be quite flaky.
This is probably because of how log updates are debounced in the driver, causing flaky testing behavior. There should be a good way to address this using Cypress.
|
process
|
address flaky proxy logging tests the cypress dashboard shows tests added and modified as part of the proxy logging pr to be quite flaky this is probably because of how log updates are debounced in the driver causing flaky testing behavior there should be a good way to address this using cypress
| 1
|
73,770
| 15,282,044,610
|
IssuesEvent
|
2021-02-23 09:02:16
|
maxotta/kiv-psi
|
https://api.github.com/repos/maxotta/kiv-psi
|
opened
|
CVE-2019-10744 (High) detected in lodash.mergewith-4.6.1.tgz, lodash-4.17.11.tgz
|
security vulnerability
|
## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash.mergewith-4.6.1.tgz</b>, <b>lodash-4.17.11.tgz</b></p></summary>
<p>
<details><summary><b>lodash.mergewith-4.6.1.tgz</b></p></summary>
<p>The Lodash method `_.mergeWith` exported as a module.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash.mergewith/-/lodash.mergewith-4.6.1.tgz">https://registry.npmjs.org/lodash.mergewith/-/lodash.mergewith-4.6.1.tgz</a></p>
<p>
Dependency Hierarchy:
- build-angular-0.8.9.tgz (Root Library)
- node-sass-4.11.0.tgz
- :x: **lodash.mergewith-4.6.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: kiv-psi/rest-api/country-app/package.json</p>
<p>Path to vulnerable library: kiv-psi/rest-api/country-app/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.0.0.tgz (Root Library)
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/maxotta/kiv-psi/commit/cf458398928043f55d46c626b0b152f9b7a275c3">cf458398928043f55d46c626b0b152f9b7a275c3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jf85-cpcp-j695">https://github.com/advisories/GHSA-jf85-cpcp-j695</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge- 4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-10744 (High) detected in lodash.mergewith-4.6.1.tgz, lodash-4.17.11.tgz - ## CVE-2019-10744 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash.mergewith-4.6.1.tgz</b>, <b>lodash-4.17.11.tgz</b></p></summary>
<p>
<details><summary><b>lodash.mergewith-4.6.1.tgz</b></p></summary>
<p>The Lodash method `_.mergeWith` exported as a module.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash.mergewith/-/lodash.mergewith-4.6.1.tgz">https://registry.npmjs.org/lodash.mergewith/-/lodash.mergewith-4.6.1.tgz</a></p>
<p>
Dependency Hierarchy:
- build-angular-0.8.9.tgz (Root Library)
- node-sass-4.11.0.tgz
- :x: **lodash.mergewith-4.6.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: kiv-psi/rest-api/country-app/package.json</p>
<p>Path to vulnerable library: kiv-psi/rest-api/country-app/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.0.0.tgz (Root Library)
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/maxotta/kiv-psi/commit/cf458398928043f55d46c626b0b152f9b7a275c3">cf458398928043f55d46c626b0b152f9b7a275c3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10744>CVE-2019-10744</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jf85-cpcp-j695">https://github.com/advisories/GHSA-jf85-cpcp-j695</a></p>
<p>Release Date: 2019-07-08</p>
<p>Fix Resolution: lodash-4.17.12, lodash-amd-4.17.12, lodash-es-4.17.12, lodash.defaultsdeep-4.6.1, lodash.merge-4.6.2, lodash.mergewith-4.6.2, lodash.template-4.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in lodash mergewith tgz lodash tgz cve high severity vulnerability vulnerable libraries lodash mergewith tgz lodash tgz lodash mergewith tgz the lodash method mergewith exported as a module library home page a href dependency hierarchy build angular tgz root library node sass tgz x lodash mergewith tgz vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file kiv psi rest api country app package json path to vulnerable library kiv psi rest api country app node modules lodash package json dependency hierarchy karma tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions of lodash lower than are vulnerable to prototype pollution the function defaultsdeep could be tricked into adding or modifying properties of object prototype using a constructor payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash lodash amd lodash es lodash defaultsdeep lodash merge lodash mergewith lodash template step up your open source security game with whitesource
| 0
|
16,116
| 20,379,189,153
|
IssuesEvent
|
2022-02-21 19:09:19
|
tdwg/chrono
|
https://api.github.com/repos/tdwg/chrono
|
opened
|
Term details show incorrect histories
|
Term - change Process - ready to implement
|
Examples (not necessarily an exhaustive list):
http://rs.tdwg.org/chrono/terms/version/earliestChronometricAge-2021-01-26.htm shows that it replaces http://rs.tdwg.org/chrono/terms/version/minimumChronometricAge-2020-11-17 whereas it should replace http://rs.tdwg.org/chrono/terms/version/maximumChronometricAge-2020-11-17
http://rs.tdwg.org/chrono/terms/version/latestChronometricAge-2021-01-26.htm shows that it replaces http://rs.tdwg.org/chrono/terms/version/maximumChronometricAge-2020-11-17 whereas it should replace http://rs.tdwg.org/chrono/terms/version/minimumChronometricAge-2020-11-17
http://rs.tdwg.org/chrono/terms/minimumChronometricAge shows that it is replaced by http://rs.tdwg.org/chrono/terms/earliestChronometricAgeReferenceSystem whereas it should be replaced by http://rs.tdwg.org/chrono/terms/latestChronometricAge
http://rs.tdwg.org/chrono/terms/maximumChronometricAge shows that it is replaced by http://rs.tdwg.org/chrono/terms/latestChronometricAge whereas it should be replaced by http://rs.tdwg.org/chrono/terms/earliestChronometricAge
|
1.0
|
Term details show incorrect histories - Examples (not necessarily an exhaustive list):
http://rs.tdwg.org/chrono/terms/version/earliestChronometricAge-2021-01-26.htm shows that it replaces http://rs.tdwg.org/chrono/terms/version/minimumChronometricAge-2020-11-17 whereas it should replace http://rs.tdwg.org/chrono/terms/version/maximumChronometricAge-2020-11-17
http://rs.tdwg.org/chrono/terms/version/latestChronometricAge-2021-01-26.htm shows that it replaces http://rs.tdwg.org/chrono/terms/version/maximumChronometricAge-2020-11-17 whereas it should replace http://rs.tdwg.org/chrono/terms/version/minimumChronometricAge-2020-11-17
http://rs.tdwg.org/chrono/terms/minimumChronometricAge shows that it is replaced by http://rs.tdwg.org/chrono/terms/earliestChronometricAgeReferenceSystem whereas it should be replaced by http://rs.tdwg.org/chrono/terms/latestChronometricAge
http://rs.tdwg.org/chrono/terms/maximumChronometricAge shows that it is replaced by http://rs.tdwg.org/chrono/terms/latestChronometricAge whereas it should be replaced by http://rs.tdwg.org/chrono/terms/earliestChronometricAge
|
process
|
term details show incorrect histories examples not necessarily an exhaustive list shows that it replaces whereas it should replace shows that it replaces whereas it should replace shows that it is replaced by whereas it should be replaced by shows that it is replaced by whereas it should be replaced by
| 1
|
11,722
| 8,476,093,309
|
IssuesEvent
|
2018-10-24 20:50:30
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
opened
|
Server side ALPN support for SslStream on Mac
|
area-System.Net.Security enhancement os-mac-os-x
|
This is follow-up on #30492
SSL API on OSX/MacOS does not provide sufficient functionality to implement server side ALPN.
That is primarily missing code to parse client hello in SSLProcessClientHelloExtensions() and callback (or some other mechanism) for server to choose from provided list.
|
True
|
Server side ALPN support for SslStream on Mac - This is follow-up on #30492
SSL API on OSX/MacOS does not provide sufficient functionality to implement server side ALPN.
That is primarily missing code to parse client hello in SSLProcessClientHelloExtensions() and callback (or some other mechanism) for server to choose from provided list.
|
non_process
|
server side alpn support for sslstream on mac this is follow up on ssl api on osx macos does not provide sufficient functionality to implement server side alpn that is primarily missing code to parse client hello in sslprocessclienthelloextensions and callback or some other mechanism for server to choose from provided list
| 0
|
2,697
| 3,005,058,038
|
IssuesEvent
|
2015-07-26 15:41:02
|
JuliaLang/julia
|
https://api.github.com/repos/JuliaLang/julia
|
closed
|
Julia does not build when /bin/sh is not bash
|
build
|
When /bin/sh is dash rather than bash, `make` fails with
```
make[2]: Leaving directory '/var/tmp/portage/dev-lang/julia-9999/work/julia-9999/src/flisp'
./flisp/flisp ./mk_julia_flisp_boot.scm
type error: file: expected string, got #f
in file ./mk_julia_flisp_boot.scm
Makefile:90: recipe for target 'julia_flisp.boot' failed
make[1]: *** [julia_flisp.boot] Error 1
```
because `export julia_flisp.boot=$(BUILDDIR)/julia_flisp.boot` in `src/Makefile` does not work in other shells due to the dot in the variable name. One way to solve the issue is by setting `SHELL = /bin/bash` in the Makefile. Another is by removing the dot from the Makefile and `mk_julia_flisp_boot.scm`.
|
1.0
|
Julia does not build when /bin/sh is not bash - When /bin/sh is dash rather than bash, `make` fails with
```
make[2]: Leaving directory '/var/tmp/portage/dev-lang/julia-9999/work/julia-9999/src/flisp'
./flisp/flisp ./mk_julia_flisp_boot.scm
type error: file: expected string, got #f
in file ./mk_julia_flisp_boot.scm
Makefile:90: recipe for target 'julia_flisp.boot' failed
make[1]: *** [julia_flisp.boot] Error 1
```
because `export julia_flisp.boot=$(BUILDDIR)/julia_flisp.boot` in `src/Makefile` does not work in other shells due to the dot in the variable name. One way to solve the issue is by setting `SHELL = /bin/bash` in the Makefile. Another is by removing the dot from the Makefile and `mk_julia_flisp_boot.scm`.
|
non_process
|
julia does not build when bin sh is not bash when bin sh is dash rather than bash make fails with make leaving directory var tmp portage dev lang julia work julia src flisp flisp flisp mk julia flisp boot scm type error file expected string got f in file mk julia flisp boot scm makefile recipe for target julia flisp boot failed make error because export julia flisp boot builddir julia flisp boot in src makefile does not work in other shells due to the dot in the variable name one way to solve the issue is by setting shell bin bash in the makefile another is by removing the dot from the makefile and mk julia flisp boot scm
| 0
|
13,022
| 15,377,835,936
|
IssuesEvent
|
2021-03-02 17:33:29
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
'viral process' may need a different parent class
|
multi-species process
|
It occurred to me that not all viral processes are going to be directly involved in a symbiotic interaction. Two examples that are child classes of 'viral process' are 'viral life cycle' (GO:0019058) and 'virus maturation' (GO:0019075). I'm not entirely sure of a better parent class for it yet, but I'm leaning toward 'biological_process' being the best one.
|
1.0
|
'viral process' may need a different parent class - It occurred to me that not all viral processes are going to be directly involved in a symbiotic interaction. Two examples that are child classes of 'viral process' are 'viral life cycle' (GO:0019058) and 'virus maturation' (GO:0019075). I'm not entirely sure of a better parent class for it yet, but I'm leaning toward 'biological_process' being the best one.
|
process
|
viral process may need a different parent class it occurred to me that not all viral processes are going to be directly involved in a symbiotic interaction two examples that are child classes of viral process are viral life cycle go and virus maturation go i m not entirely sure of a better parent class for it yet but i m leaning toward biological process being the best one
| 1
|
41,892
| 5,398,621,237
|
IssuesEvent
|
2017-02-27 17:21:57
|
joehand/Joe-s-Habit-Tracker
|
https://api.github.com/repos/joehand/Joe-s-Habit-Tracker
|
closed
|
About Page? Other information
|
design/ui
|
Probably need something along these lines. Do I need a copyright? Can I add one even if I copied a lot of other code?
|
1.0
|
About Page? Other information - Probably need something along these lines. Do I need a copyright? Can I add one even if I copied a lot of other code?
|
non_process
|
about page other information probably need something along these lines do i need a copyright can i add one even if i copied a lot of other code
| 0
|
17,848
| 23,786,761,753
|
IssuesEvent
|
2022-09-02 10:51:21
|
Tencent/tdesign-miniprogram
|
https://api.github.com/repos/Tencent/tdesign-miniprogram
|
closed
|
[icon] Hope that third-party images can be used directly here
|
enhancement good first issue in process
|
### What problem does this feature solve
When building a new project, we usually want the icons to stay consistent across the whole project.
Sometimes, depending on the client's requirements, icons may use gradient colors and have very complex styles; in those cases the icons provided by TDesign are not enough.
At that point, the UI designer has to produce every icon by hand.
Until recently we uploaded icons to Alibaba's iconfont and referenced its CDN links directly, but a while ago iconfont ran into problems and took a long time to recover, which seriously affected development progress.
So a feature for referencing images directly is very necessary; that way, if an icon changes, we only need to replace the image on the server.
Also, when I built mini programs before, I used vant components, which provide this feature. We are now considering switching our technical stack, and if this part cannot be implemented I will have to think of another workaround.
### What solution do you suggest
Check the value of name in the API: if it is an icon name, display the corresponding icon; if it is a URL, display the corresponding image.
|
1.0
|
[icon] Hope that third-party images can be used directly here - ### What problem does this feature solve
When building a new project, we usually want the icons to stay consistent across the whole project.
Sometimes, depending on the client's requirements, icons may use gradient colors and have very complex styles; in those cases the icons provided by TDesign are not enough.
At that point, the UI designer has to produce every icon by hand.
Until recently we uploaded icons to Alibaba's iconfont and referenced its CDN links directly, but a while ago iconfont ran into problems and took a long time to recover, which seriously affected development progress.
So a feature for referencing images directly is very necessary; that way, if an icon changes, we only need to replace the image on the server.
Also, when I built mini programs before, I used vant components, which provide this feature. We are now considering switching our technical stack, and if this part cannot be implemented I will have to think of another workaround.
### What solution do you suggest
Check the value of name in the API: if it is an icon name, display the corresponding icon; if it is a URL, display the corresponding image.
|
process
|
hope that third party images can be used directly here what problem does this feature solve when building a new project we usually want the icons to stay consistent across the whole project sometimes depending on the client s requirements icons may use gradient colors and have very complex styles in those cases the icons provided by tdesign are not enough at that point the ui designer has to produce every icon by hand until recently we uploaded icons to alibaba s iconfont and referenced its cdn links directly but a while ago iconfont ran into problems and took a long time to recover which seriously affected development progress so a feature for referencing images directly is very necessary that way if an icon changes we only need to replace the image on the server also when i built mini programs before i used vant components which provide this feature we are now considering switching our technical stack and if this part cannot be implemented i will have to think of another workaround what solution do you suggest check the value of name in the api if it is an icon name display the corresponding icon if it is a url display the corresponding image
| 1
|
2,139
| 4,982,504,528
|
IssuesEvent
|
2016-12-07 11:35:01
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
opened
|
[subtitles] [FR] Affaires étrangères : deux poids deux mesures au Parlement européen
|
Language: French Process: [0] Awaiting subtitles
|
# Video title
Affaires étrangères : deux poids deux mesures au Parlement européen
# URL
https://www.youtube.com/watch?v=32mjOWZvm2Q&list=PLnAm9o_Xn_3DU-PS1cGkVoMthELDzFVRT&index=9
# Youtube subtitles language
Français
# Duration
1:15
# URL subtitles
https://www.youtube.com/timedtext_editor?lang=fr&ui=hd&v=32mjOWZvm2Q&ref=watch&action_mde_edit_form=1&tab=captions&bl=vmp
|
1.0
|
[subtitles] [FR] Affaires étrangères : deux poids deux mesures au Parlement européen - # Video title
Affaires étrangères : deux poids deux mesures au Parlement européen
# URL
https://www.youtube.com/watch?v=32mjOWZvm2Q&list=PLnAm9o_Xn_3DU-PS1cGkVoMthELDzFVRT&index=9
# Youtube subtitles language
Français
# Duration
1:15
# URL subtitles
https://www.youtube.com/timedtext_editor?lang=fr&ui=hd&v=32mjOWZvm2Q&ref=watch&action_mde_edit_form=1&tab=captions&bl=vmp
|
process
|
affaires étrangères deux poids deux mesures au parlement européen video title affaires étrangères deux poids deux mesures au parlement européen url youtube subtitles language français duration url subtitles
| 1
|
5,221
| 8,026,089,466
|
IssuesEvent
|
2018-07-27 01:46:06
|
w3c/html
|
https://api.github.com/repos/w3c/html
|
closed
|
CfC: Do not merge WebWorkers into HTML
|
process
|
This is a CfC on the proposal
> The Working Group should not merge Web Workers into the HTML specification. Note that this would overturn the decision made following the CfC #1090
Further discussion led to the proposal to revisit the decision. The main reasons given were
1. The expectation that merging it would improve the work rate looks unlikely to be true
1. Web Workers is not really linked to HTML and is not an HTML-only feature
Please respond to this CfC by adding a thumbs up, or a comment explaining (or adding a thumbs down to a comment explaining) why you disagree with the proposal, on or before 29 June 2018.
|
1.0
|
CfC: Do not merge WebWorkers into HTML - This is a CfC on the proposal
> The Working Group should not merge Web Workers into the HTML specification. Note that this would overturn the decision made following the CfC #1090
Further discussion led to the proposal to revisit the decision. The main reasons given were
1. The expectation that merging it would improve the work rate looks unlikely to be true
1. Web Workers is not really linked to HTML and is not an HTML-only feature
Please respond to this CfC by adding a thumbs up, or a comment explaining (or adding a thumbs down to a comment explaining) why you disagree with the proposal, on or before 29 June 2018.
|
process
|
cfc do not merge webworkers into html this is a cfc on the proposal the working group should not merge web workers into the html specification note that this would overturn the decision made following the cfc further discussion led to the proposal to revisit the decision the main reasons given were the expectation that merging it would improve the work rate looks unlikely to be true web workers is not really linked to html and is not an html only feature please respond to this cfc by adding a thumbs up or a comment explaining or adding a thumbs down to a comment explaining why you disagree with the proposal on or before june
| 1
|
336,460
| 24,499,815,847
|
IssuesEvent
|
2022-10-10 11:52:42
|
DuplosFidibuss/academic-time-planner
|
https://api.github.com/repos/DuplosFidibuss/academic-time-planner
|
closed
|
Cleanup current version of documentation
|
documentation
|
The multi-liners should be removed. Chapter 3 should be regrouped (see "Berichtstruktur_PA_BA"). Typos should be corrected.
|
1.0
|
Cleanup current version of documentation - The multi-liners should be removed. Chapter 3 should be regrouped (see "Berichtstruktur_PA_BA"). Typos should be corrected.
|
non_process
|
cleanup current version of documentation the multi liners should be removed chapter should be regrouped see berichtstruktur pa ba typos should be corrected
| 0
|
30,070
| 6,010,288,679
|
IssuesEvent
|
2017-06-06 12:52:02
|
bridgedotnet/Bridge
|
https://api.github.com/repos/bridgedotnet/Bridge
|
closed
|
Metadata from dependant libraries not included in output
|
defect
|
(I'm using code from `master`, which I believe includes the recent reflection changes)
I have two Bridge libraries, TestFramework and Tests. Tests has a reference to TestFramework. When TestFramework builds, it outputs a `.meta.js` file. However, this file never makes it to the output of Tests:
- `combineScripts: true` - Tests.js ends up with the reflection metadata for the Tests project, but not for the TestRunner project.
- `combineScripts: false` then Tests.meta.js has the reflection metadata for the Tests project, but there is no TestRunner.meta.js
So it seems that no matter what I do, I am unable to get the reflection metadata for the referenced project.
Is this a bug, or is there something else I need to do?
|
1.0
|
Metadata from dependant libraries not included in output - (I'm using code from `master`, which I believe includes the recent reflection changes)
I have two Bridge libraries, TestFramework and Tests. Tests has a reference to TestFramework. When TestFramework builds, it outputs a `.meta.js` file. However, this file never makes it to the output of Tests:
- `combineScripts: true` - Tests.js ends up with the reflection metadata for the Tests project, but not for the TestRunner project.
- `combineScripts: false` then Tests.meta.js has the reflection metadata for the Tests project, but there is no TestRunner.meta.js
So it seems that no matter what I do, I am unable to get the reflection metadata for the referenced project.
Is this a bug, or is there something else I need to do?
|
non_process
|
metadata from dependant libraries not included in output i m using code from master which i believe includes the recent reflection changes i have two bridge libraries testframework and tests tests has a reference to testframework when testframework builds it outputs a meta js file however this file never makes it to the output of tests combinescripts true tests js ends up with the reflection metadata for the tests project but not for the testrunner project combinescripts false then tests meta js has the reflection metadata for the tests project but there is no testrunner meta js so it seems that no matter what i do i am unable to get the reflection metadata for the referenced project is this a bug or is there something else i need to do
| 0
|
7,358
| 4,940,083,605
|
IssuesEvent
|
2016-11-29 15:58:57
|
vsch/idea-multimarkdown
|
https://api.github.com/repos/vsch/idea-multimarkdown
|
closed
|
Smooth scroll issues with preview
|
fix available fixed for next release library bug usability
|
I initially thought the rendering of the markdown preview was very slow, and fairly unusable, but it appears that it's the scrolling that is the issue. When scrolling with a trackpad (on osx) the preview is very sticky and painful to use (the editor tab for the raw markup meanwhile is fine).
|
True
|
Smooth scroll issues with preview - I initially thought the rendering of the markdown preview was very slow, and fairly unusable, but it appears that it's the scrolling that is the issue. When scrolling with a trackpad (on osx) the preview is very sticky and painful to use (the editor tab for the raw markup meanwhile is fine).
|
non_process
|
smooth scroll issues with preview i initially thought the rendering of the markdown preview was very slow and fairly unusable but it appears that it s the scrolling that is the issue when scrolling with a trackpad on osx the preview is very sticky and painful to use the editor tab for the raw markup meanwhile is fine
| 0
|
432,490
| 30,285,561,246
|
IssuesEvent
|
2023-07-08 16:27:37
|
tungbq/AWS-LearningResource
|
https://api.github.com/repos/tungbq/AWS-LearningResource
|
closed
|
Add AWS Cloudfront URLs
|
documentation help wanted good first issue aws
|
Append these lines for controltower service to `services.json`:
{
"service_name": "AWS Cloudfront",
"service_url": "https://docs.aws.amazon.com/cloudfront",
"service_youtube_url": "https://youtu.be/AT-nHW3_SVI",
"service_folder_name": "cloudfront"
}
|
1.0
|
Add AWS Cloudfront URLs - Append these lines for controltower service to `services.json`:
{
"service_name": "AWS Cloudfront",
"service_url": "https://docs.aws.amazon.com/cloudfront",
"service_youtube_url": "https://youtu.be/AT-nHW3_SVI",
"service_folder_name": "cloudfront"
}
|
non_process
|
add aws cloudfront urls append these lines for controltower service to services json service name aws cloudfront service url service youtube url service folder name cloudfront
| 0
|
22,087
| 30,609,368,754
|
IssuesEvent
|
2023-07-23 12:21:37
|
danrleypereira/verzel-pleno-prova
|
https://api.github.com/repos/danrleypereira/verzel-pleno-prova
|
opened
|
Setup CI/CD Pipeline with GitHub Actions, AWS EC2, and CodeDeploy
|
Processo Seletivo DevOps
|
Proposed Solution
Implement a CI/CD pipeline using GitHub Actions for integration, AWS EC2 for the deployment environment, and AWS CodeDeploy for deployment automation.
The proposed workflow should include the following steps:
Code Commit: Developers commit the code to the GitHub repository. Any new commit to the main branch triggers the CI/CD pipeline.
Build: GitHub Actions initiates the build process and runs the test cases.
CodeDeploy Deployment: If the build and tests are successful, GitHub Actions triggers a deployment process using AWS CodeDeploy.
Deployment: AWS CodeDeploy deploys the latest version of the application to an AWS EC2 instance. The successful deployment results in the latest version of the application running in EC2.
Acceptance Criteria
A commit to the main branch should trigger the pipeline automatically.
The pipeline should successfully build the application and run all the test cases.
If the build or tests fail, the pipeline should halt and notify the team.
If the build and tests are successful, AWS CodeDeploy should automatically deploy the application to AWS EC2.
On successful deployment, the team should be notified.
Tasks
Create an IAM role with the necessary permissions for AWS CodeDeploy and EC2.
Setup AWS CodeDeploy to deploy to the EC2 instance.
Write GitHub Actions workflow file for CI/CD.
Test the pipeline by making a commit to the main branch.
Document the pipeline setup and workflow for future reference.
|
1.0
|
Setup CI/CD Pipeline with GitHub Actions, AWS EC2, and CodeDeploy - Proposed Solution
Implement a CI/CD pipeline using GitHub Actions for integration, AWS EC2 for the deployment environment, and AWS CodeDeploy for deployment automation.
The proposed workflow should include the following steps:
Code Commit: Developers commit the code to the GitHub repository. Any new commit to the main branch triggers the CI/CD pipeline.
Build: GitHub Actions initiates the build process and runs the test cases.
CodeDeploy Deployment: If the build and tests are successful, GitHub Actions triggers a deployment process using AWS CodeDeploy.
Deployment: AWS CodeDeploy deploys the latest version of the application to an AWS EC2 instance. The successful deployment results in the latest version of the application running in EC2.
Acceptance Criteria
A commit to the main branch should trigger the pipeline automatically.
The pipeline should successfully build the application and run all the test cases.
If the build or tests fail, the pipeline should halt and notify the team.
If the build and tests are successful, AWS CodeDeploy should automatically deploy the application to AWS EC2.
On successful deployment, the team should be notified.
Tasks
Create an IAM role with the necessary permissions for AWS CodeDeploy and EC2.
Setup AWS CodeDeploy to deploy to the EC2 instance.
Write GitHub Actions workflow file for CI/CD.
Test the pipeline by making a commit to the main branch.
Document the pipeline setup and workflow for future reference.
|
process
|
setup ci cd pipeline with github actions aws and codedeploy proposed solution implement a ci cd pipeline using github actions for integration aws for the deployment environment and aws codedeploy for deployment automation the proposed workflow should include the following steps code commit developers commit the code to the github repository any new commit to the main branch triggers the ci cd pipeline build github actions initiates the build process and runs the test cases codedeploy deployment if the build and tests are successful github actions triggers a deployment process using aws codedeploy deployment aws codedeploy deploys the latest version of the application to an aws instance the successful deployment results in the latest version of the application running in acceptance criteria a commit to the main branch should trigger the pipeline automatically the pipeline should successfully build the application and run all the test cases if the build or tests fail the pipeline should halt and notify the team if the build and tests are successful aws codedeploy should automatically deploy the application to aws on successful deployment the team should be notified tasks create an iam role with the necessary permissions for aws codedeploy and setup aws codedeploy to deploy to the instance write github actions workflow file for ci cd test the pipeline by making a commit to the main branch document the pipeline setup and workflow for future reference
| 1
|
18,637
| 24,580,537,189
|
IssuesEvent
|
2022-10-13 15:18:01
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Mobile apps][FHIR] JSON structure > Inconsistency between iOS and android
|
Bug P1 iOS Android Process: Fixed Process: Tested dev
|
Steps:
1. Sign up or sign in to the mobile app (Both android and iOS)
2. Enroll to the study
3. Click on the form step questions
4. Click on the Skip button and submit the response
5. Go to the FHIR store and observe
AR: There is an inconsistency between both iOS JSON and Android JSON in FHIR
ER: There should be a consistency between both android and iOS
**iOS JSON structure**
```
{
"authored": "2022-09-22T14:55:29.717+05:30",
"id": "bce06a2a-783d-4371-8a56-dede90fed2f5",
"identifier": {
"type": {
"text": "questionnaire"
},
"use": "official",
"value": "CopyOfImportedM@2c9080af82e94ef30182eee70f2b0016@5380317f-5659-4182-ad34-3aa4c5163a9f@q6@1"
},
"item": [
{
"definition": "scale",
"linkId": "Step1_Scale",
"text": "Question Scale"
},
{
"definition": "continuousScale",
"linkId": "Short2_CS",
"text": "Question CS"
},
{
"definition": "textScale",
"linkId": "Short3_TextScal",
"text": "Question Text scale"
},
{
"definition": "valuePicker",
"linkId": "short4_VP",
"text": "Question VP"
},
{
"definition": "textChoice",
"linkId": "short6_TC",
"text": "Question TC"
},
{
"definition": "boolean",
"linkId": "short_Boolean",
"text": "Question boolean"
},
{
"definition": "numeric",
"linkId": "short_Numeric",
"text": "Question Numeric"
},
{
"definition": "timeOfDay",
"linkId": "short_",
"text": "time of the day"
},
{
"definition": "date",
"linkId": "short_date",
"text": "question date"
},
{
"definition": "text",
"linkId": "text",
"text": "text"
},
{
"definition": "email",
"linkId": "email",
"text": "email"
},
{
"definition": "timeInterval",
"linkId": "timeinterval_sh",
"text": "time interval"
},
{
"definition": "height",
"linkId": "height",
"text": "height"
},
{
"definition": "location",
"linkId": "loc",
"text": "location"
},
{
"definition": "grouped",
"item": [
{
"definition": "scale",
"linkId": "formscale",
"text": "Scale"
},
{
"definition": "continuousScale",
"linkId": "CS_STK",
"text": "CS?"
},
{
"definition": "textScale",
"linkId": "ts_stk",
"text": "ts"
},
{
"definition": "valuePicker",
"linkId": "vp_sk",
"text": "vp"
},
{
"definition": "textChoice",
"linkId": "TC_SK",
"text": "Tc"
},
{
"definition": "boolean",
"linkId": "boolean",
"text": "boolean Question"
},
{
"definition": "numeric",
"linkId": "numeric_qsk",
"text": "numeric"
},
{
"definition": "timeOfDay",
"linkId": "Timeoftheday_QS",
"text": "timeoftheday_quest"
},
{
"definition": "date",
"linkId": "date",
"text": "date"
},
{
"definition": "text",
"linkId": "text_stk",
"text": "text"
},
{
"definition": "email",
"linkId": "email_qst",
"text": "Email"
},
{
"definition": "timeInterval",
"linkId": "TI",
"text": "TI"
},
{
"definition": "height",
"linkId": "Height_ST",
"text": "Height"
},
{
"definition": "location",
"linkId": "location+STK",
"text": "location"
}
],
"linkId": "formsteptitle",
"text": "formsteptitle"
}
],
"meta": {
"lastUpdated": "2022-09-22T09:24:45.559365+00:00",
"versionId": "MTY2MzgzODY4NTU1OTM2NTAwMA"
},
"questionnaire": "/fhirStores/FHIR_CopyOfImportedM/fhir/Questionnaire/b9c41fe7-d7b0-4432-ac30-e83f5737a352/_history/MTY2MjExODUyNTU1OTg1MDAwMA",
"resourceType": "QuestionnaireResponse",
"source": {
"reference": "Patient/a43750eb-d869-464e-a5a6-ddb7c291d7c4",
"type": "Patient"
},
"status": "completed"
}
```
**Android JSON structure**
```
{
"authored": "2022-09-22T17:11:53.775+05:30",
"id": "bef3c00f-10f5-4906-8ad0-4aa90c78ec5a",
"identifier": {
"type": {
"text": "questionnaire"
},
"use": "official",
"value": "CopyOfImportedM@2c9080af82e94ef30182eee70f2b0016@f0898c65-4a4c-4556-8b2a-f1de00ae81c7@q6@1"
},
"item": [
{
"definition": "scale",
"linkId": "Step1_Scale",
"text": "Question Scale"
},
{
"definition": "continuousScale",
"linkId": "Short2_CS",
"text": "Question CS"
},
{
"definition": "textScale",
"linkId": "Short3_TextScal",
"text": "Question Text scale"
},
{
"definition": "valuePicker",
"linkId": "short4_VP",
"text": "Question VP"
},
{
"definition": "textChoice",
"linkId": "short6_TC",
"text": "Question TC"
},
{
"definition": "boolean",
"linkId": "short_Boolean",
"text": "Question boolean"
},
{
"definition": "numeric",
"linkId": "short_Numeric",
"text": "Question Numeric"
},
{
"definition": "timeOfDay",
"linkId": "short_",
"text": "time of the day"
},
{
"definition": "date",
"linkId": "short_date",
"text": "question date"
},
{
"definition": "text",
"linkId": "text",
"text": "text"
},
{
"definition": "email",
"linkId": "email",
"text": "email"
},
{
"definition": "timeInterval",
"linkId": "timeinterval_sh",
"text": "time interval"
},
{
"definition": "height",
"linkId": "height",
"text": "height"
},
{
"definition": "location",
"linkId": "loc",
"text": "location"
},
{
"definition": "grouped",
"linkId": "formsteptitle",
"text": "formsteptitle"
}
],
"meta": {
"lastUpdated": "2022-09-22T11:42:01.373555+00:00",
"versionId": "MTY2Mzg0NjkyMTM3MzU1NTAwMA"
},
"questionnaire": "/fhirStores/FHIR_CopyOfImportedM/fhir/Questionnaire/b9c41fe7-d7b0-4432-ac30-e83f5737a352/_history/MTY2MjExODUyNTU1OTg1MDAwMA",
"resourceType": "QuestionnaireResponse",
"source": {
"reference": "Patient/006ec87b-6f61-4000-858c-d1a0fe508e0f",
"type": "Patient"
},
"status": "completed"
}
```
|
2.0
|
[Mobile apps][FHIR] JSON structure > Inconsistency between iOS and android - Steps:
1. Sign up or sign in to the mobile app (Both android and iOS)
2. Enroll to the study
3. Click on the form step questions
4. Click on the Skip button and submit the response
5. Go to the FHIR store and observe
AR: There is an inconsistency between both iOS JSON and Android JSON in FHIR
ER: There should be a consistency between both android and iOS
**iOS JSON structure**
```
{
"authored": "2022-09-22T14:55:29.717+05:30",
"id": "bce06a2a-783d-4371-8a56-dede90fed2f5",
"identifier": {
"type": {
"text": "questionnaire"
},
"use": "official",
"value": "CopyOfImportedM@2c9080af82e94ef30182eee70f2b0016@5380317f-5659-4182-ad34-3aa4c5163a9f@q6@1"
},
"item": [
{
"definition": "scale",
"linkId": "Step1_Scale",
"text": "Question Scale"
},
{
"definition": "continuousScale",
"linkId": "Short2_CS",
"text": "Question CS"
},
{
"definition": "textScale",
"linkId": "Short3_TextScal",
"text": "Question Text scale"
},
{
"definition": "valuePicker",
"linkId": "short4_VP",
"text": "Question VP"
},
{
"definition": "textChoice",
"linkId": "short6_TC",
"text": "Question TC"
},
{
"definition": "boolean",
"linkId": "short_Boolean",
"text": "Question boolean"
},
{
"definition": "numeric",
"linkId": "short_Numeric",
"text": "Question Numeric"
},
{
"definition": "timeOfDay",
"linkId": "short_",
"text": "time of the day"
},
{
"definition": "date",
"linkId": "short_date",
"text": "question date"
},
{
"definition": "text",
"linkId": "text",
"text": "text"
},
{
"definition": "email",
"linkId": "email",
"text": "email"
},
{
"definition": "timeInterval",
"linkId": "timeinterval_sh",
"text": "time interval"
},
{
"definition": "height",
"linkId": "height",
"text": "height"
},
{
"definition": "location",
"linkId": "loc",
"text": "location"
},
{
"definition": "grouped",
"item": [
{
"definition": "scale",
"linkId": "formscale",
"text": "Scale"
},
{
"definition": "continuousScale",
"linkId": "CS_STK",
"text": "CS?"
},
{
"definition": "textScale",
"linkId": "ts_stk",
"text": "ts"
},
{
"definition": "valuePicker",
"linkId": "vp_sk",
"text": "vp"
},
{
"definition": "textChoice",
"linkId": "TC_SK",
"text": "Tc"
},
{
"definition": "boolean",
"linkId": "boolean",
"text": "boolean Question"
},
{
"definition": "numeric",
"linkId": "numeric_qsk",
"text": "numeric"
},
{
"definition": "timeOfDay",
"linkId": "Timeoftheday_QS",
"text": "timeoftheday_quest"
},
{
"definition": "date",
"linkId": "date",
"text": "date"
},
{
"definition": "text",
"linkId": "text_stk",
"text": "text"
},
{
"definition": "email",
"linkId": "email_qst",
"text": "Email"
},
{
"definition": "timeInterval",
"linkId": "TI",
"text": "TI"
},
{
"definition": "height",
"linkId": "Height_ST",
"text": "Height"
},
{
"definition": "location",
"linkId": "location+STK",
"text": "location"
}
],
"linkId": "formsteptitle",
"text": "formsteptitle"
}
],
"meta": {
"lastUpdated": "2022-09-22T09:24:45.559365+00:00",
"versionId": "MTY2MzgzODY4NTU1OTM2NTAwMA"
},
"questionnaire": "/fhirStores/FHIR_CopyOfImportedM/fhir/Questionnaire/b9c41fe7-d7b0-4432-ac30-e83f5737a352/_history/MTY2MjExODUyNTU1OTg1MDAwMA",
"resourceType": "QuestionnaireResponse",
"source": {
"reference": "Patient/a43750eb-d869-464e-a5a6-ddb7c291d7c4",
"type": "Patient"
},
"status": "completed"
}
```
**Android JSON structure**
```
{
"authored": "2022-09-22T17:11:53.775+05:30",
"id": "bef3c00f-10f5-4906-8ad0-4aa90c78ec5a",
"identifier": {
"type": {
"text": "questionnaire"
},
"use": "official",
"value": "CopyOfImportedM@2c9080af82e94ef30182eee70f2b0016@f0898c65-4a4c-4556-8b2a-f1de00ae81c7@q6@1"
},
"item": [
{
"definition": "scale",
"linkId": "Step1_Scale",
"text": "Question Scale"
},
{
"definition": "continuousScale",
"linkId": "Short2_CS",
"text": "Question CS"
},
{
"definition": "textScale",
"linkId": "Short3_TextScal",
"text": "Question Text scale"
},
{
"definition": "valuePicker",
"linkId": "short4_VP",
"text": "Question VP"
},
{
"definition": "textChoice",
"linkId": "short6_TC",
"text": "Question TC"
},
{
"definition": "boolean",
"linkId": "short_Boolean",
"text": "Question boolean"
},
{
"definition": "numeric",
"linkId": "short_Numeric",
"text": "Question Numeric"
},
{
"definition": "timeOfDay",
"linkId": "short_",
"text": "time of the day"
},
{
"definition": "date",
"linkId": "short_date",
"text": "question date"
},
{
"definition": "text",
"linkId": "text",
"text": "text"
},
{
"definition": "email",
"linkId": "email",
"text": "email"
},
{
"definition": "timeInterval",
"linkId": "timeinterval_sh",
"text": "time interval"
},
{
"definition": "height",
"linkId": "height",
"text": "height"
},
{
"definition": "location",
"linkId": "loc",
"text": "location"
},
{
"definition": "grouped",
"linkId": "formsteptitle",
"text": "formsteptitle"
}
],
"meta": {
"lastUpdated": "2022-09-22T11:42:01.373555+00:00",
"versionId": "MTY2Mzg0NjkyMTM3MzU1NTAwMA"
},
"questionnaire": "/fhirStores/FHIR_CopyOfImportedM/fhir/Questionnaire/b9c41fe7-d7b0-4432-ac30-e83f5737a352/_history/MTY2MjExODUyNTU1OTg1MDAwMA",
"resourceType": "QuestionnaireResponse",
"source": {
"reference": "Patient/006ec87b-6f61-4000-858c-d1a0fe508e0f",
"type": "Patient"
},
"status": "completed"
}
```
|
process
|
json structure inconsistency between ios and android steps sign up or sign in to the mobile app both android and ios enroll to the study click on the form step questions click on the skip button and submit the response go to the fhir store and observe ar there is an inconsistency between both ios json and android json in fhir er there should be a consistency between both android and ios ios json structure authored id identifier type text questionnaire use official value copyofimportedm item definition scale linkid scale text question scale definition continuousscale linkid cs text question cs definition textscale linkid textscal text question text scale definition valuepicker linkid vp text question vp definition textchoice linkid tc text question tc definition boolean linkid short boolean text question boolean definition numeric linkid short numeric text question numeric definition timeofday linkid short text time of the day definition date linkid short date text question date definition text linkid text text text definition email linkid email text email definition timeinterval linkid timeinterval sh text time interval definition height linkid height text height definition location linkid loc text location definition grouped item definition scale linkid formscale text scale definition continuousscale linkid cs stk text cs definition textscale linkid ts stk text ts definition valuepicker linkid vp sk text vp definition textchoice linkid tc sk text tc definition boolean linkid boolean text boolean question definition numeric linkid numeric qsk text numeric definition timeofday linkid timeoftheday qs text timeoftheday quest definition date linkid date text date definition text linkid text stk text text definition email linkid email qst text email definition timeinterval linkid ti text ti definition height linkid height st text height definition location linkid location stk text location linkid formsteptitle text formsteptitle meta lastupdated versionid questionnaire 
fhirstores fhir copyofimportedm fhir questionnaire history resourcetype questionnaireresponse source reference patient type patient status completed android json structure authored id identifier type text questionnaire use official value copyofimportedm item definition scale linkid scale text question scale definition continuousscale linkid cs text question cs definition textscale linkid textscal text question text scale definition valuepicker linkid vp text question vp definition textchoice linkid tc text question tc definition boolean linkid short boolean text question boolean definition numeric linkid short numeric text question numeric definition timeofday linkid short text time of the day definition date linkid short date text question date definition text linkid text text text definition email linkid email text email definition timeinterval linkid timeinterval sh text time interval definition height linkid height text height definition location linkid loc text location definition grouped linkid formsteptitle text formsteptitle meta lastupdated versionid questionnaire fhirstores fhir copyofimportedm fhir questionnaire history resourcetype questionnaireresponse source reference patient type patient status completed
| 1
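The record above shows the core inconsistency: the iOS `QuestionnaireResponse` nests the form-step answers under the grouped entry's `item` array, while the Android response omits that nested array entirely. A minimal sketch of how such a structural diff could be detected programmatically (function names are illustrative, field names follow FHIR R4):

```python
# Sketch: locate structural differences between two QuestionnaireResponse
# item trees, e.g. the nested "item" array that the iOS payload emits under
# the "grouped" entry but the Android payload omits.

def item_shape(items, path=""):
    """Flatten an item tree into {linkId-path: key-set} for comparison."""
    shape = {}
    for it in items or []:
        p = f"{path}/{it.get('linkId', '?')}"
        shape[p] = frozenset(it.keys())
        shape.update(item_shape(it.get("item"), p))  # recurse into children
    return shape

def diff_responses(ios, android):
    """Return linkId paths present in one response's item tree but not the other."""
    a, b = item_shape(ios.get("item")), item_shape(android.get("item"))
    return {"ios_only": sorted(a.keys() - b.keys()),
            "android_only": sorted(b.keys() - a.keys())}
```

Run against the two payloads above, every nested linkId under `/formsteptitle` would surface as `ios_only`, which is exactly the reported inconsistency.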
|
726,626
| 25,005,348,274
|
IssuesEvent
|
2022-11-03 11:22:19
|
zitadel/zitadel
|
https://api.github.com/repos/zitadel/zitadel
|
closed
|
Document Metrics Endpoint
|
category: docs priority: medium effort: 2
|
The metrics endpoint should be integrated in the documentation.
**Acceptance criteria**
- [ ] Add metrics endpoint
- [ ] How can customers access data from the metrics
- [ ] What kind of metrics are included
- [ ] Document how users know if ZITADEL is running, healthz endpoint, readiness, etc.
|
1.0
|
Document Metrics Endpoint - The metrics endpoint should be integrated in the documentation.
**Acceptance criteria**
- [ ] Add metrics endpoint
- [ ] How can customers access data from the metrics
- [ ] What kind of metrics are included
- [ ] Document how users know if ZITADEL is running, healthz endpoint, readiness, etc.
|
non_process
|
document metrics endpoint the metrics endpoint should be integrated in the documentation acceptance criteria add metrics endpoint how can customers access data from the metrics what kind of metrics are included document how users know if zitadel is running healthz endpoint readiness etc
| 0
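The acceptance criteria above ask how customers can access data from the metrics endpoint. Assuming the endpoint serves the standard Prometheus text exposition format (an assumption here, common for Go services; the exact ZITADEL path is deployment-specific and not asserted), a minimal parser sketch:

```python
# Sketch: parse Prometheus text-format exposition lines into
# (name, labels, value) tuples. Assumes the metrics endpoint serves the
# standard text format; HELP/TYPE comment lines are skipped.
import re

METRIC_RE = re.compile(r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
                       r'(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$')

def parse_metrics(text):
    out = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comments
            continue
        m = METRIC_RE.match(line)
        if m:
            out.append((m.group("name"), m.group("labels") or "",
                        float(m.group("value"))))
    return out
```

A readiness check in the same spirit would hit the health endpoint and treat any HTTP 200 as "running".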
|
323,896
| 9,880,151,133
|
IssuesEvent
|
2019-06-24 11:54:53
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
dsp.aarth.net - site is not usable
|
browser-firefox engine-gecko priority-normal
|
<!-- @browser: Firefox 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:64.0) Gecko/20100101 Firefox/64.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://dsp.aarth.net/?cid=6zoJN&sub1=184133&sub2=176110&sub3=157.41.66.251&sub4=1775813&cb=@cb
**Browser / Version**: Firefox 64.0
**Operating System**: Windows 10
**Tested Another Browser**: Unknown
**Problem type**: Site is not usable
**Description**: it was always shows when i suffering a site it automatically opened through new tab
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/6/e78b6ad1-a911-473d-aa24-d21cb2b2f204.jpeg)
[](https://webcompat.com/uploads/2019/6/fc9631b1-c1c2-40ad-b479-2156e06a3136.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190108160530</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: release</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Error: "The character encoding of the HTML document was not declared. The document will render with garbled text in some browser configurations if the document contains characters from outside the US-ASCII range. The character encoding of the page must be declared in the document or in the transfer protocol." {file: "https://dsp.aarth.net/?cid=6zoJN&sub1=184133&sub2=176110&sub3=157.41.66.251&sub4=1775813&cb=@cb" line: 0}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
dsp.aarth.net - site is not usable - <!-- @browser: Firefox 64.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:64.0) Gecko/20100101 Firefox/64.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://dsp.aarth.net/?cid=6zoJN&sub1=184133&sub2=176110&sub3=157.41.66.251&sub4=1775813&cb=@cb
**Browser / Version**: Firefox 64.0
**Operating System**: Windows 10
**Tested Another Browser**: Unknown
**Problem type**: Site is not usable
**Description**: it was always shows when i suffering a site it automatically opened through new tab
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/6/e78b6ad1-a911-473d-aa24-d21cb2b2f204.jpeg)
[](https://webcompat.com/uploads/2019/6/fc9631b1-c1c2-40ad-b479-2156e06a3136.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190108160530</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: release</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Error: "The character encoding of the HTML document was not declared. The document will render with garbled text in some browser configurations if the document contains characters from outside the US-ASCII range. The character encoding of the page must be declared in the document or in the transfer protocol." {file: "https://dsp.aarth.net/?cid=6zoJN&sub1=184133&sub2=176110&sub3=157.41.66.251&sub4=1775813&cb=@cb" line: 0}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
dsp aarth net site is not usable url browser version firefox operating system windows tested another browser unknown problem type site is not usable description it was always shows when i suffering a site it automatically opened through new tab steps to reproduce browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen false mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel release console messages from with ❤️
| 0
|
505,944
| 14,655,103,432
|
IssuesEvent
|
2020-12-28 10:13:46
|
microsoft/terminal
|
https://api.github.com/repos/microsoft/terminal
|
closed
|
[Functional-Open dropdown]- User is not able to perform action using shortcuts defined for the dropdown menu items.
|
Area-User Interface HCL-E+D HCL-WindowsConsole HCL-WindowsTerminal Issue-Bug Priority-0 Product-Terminal
|
**User Experience:**
Users who rely on screen reader/Keyboard will not be able to perform action efficiently if shortcuts are not working for the dropdown menu items.
**Test Environment:**
OS: Win 10 2004 Build (20262.1010)
App: Windows Terminal
App version: 1.4.3243.0
**Repro Steps:**
1. Open Windows terminal app
2. Press "Ctrl+shift+Space" key to open the dropdown control.
3. Navigate to the Menu items.
4. Now press shortcut key defined for menu item say "Ctrl+," (which is defined for settings menu item).
5. Observe the issue.
**Actual Result:**
User is not able to perform action using shortcuts defined for the dropdown menu items.
Pressing shortcut key defined for menu items, nothing is getting activated.
**Expected Result:**
User should be able to perform action using shortcuts defined for the dropdown menu items.
Pressing shortcut key "Ctrl+," settings should get open.
[Functional_dropdown_menu keys not working.zip](https://github.com/microsoft/terminal/files/5629395/Functional_dropdown_menu.keys.not.working.zip)
|
1.0
|
[Functional-Open dropdown]- User is not able to perform action using shortcuts defined for the dropdown menu items. - **User Experience:**
Users who rely on screen reader/Keyboard will not be able to perform action efficiently if shortcuts are not working for the dropdown menu items.
**Test Environment:**
OS: Win 10 2004 Build (20262.1010)
App: Windows Terminal
App version: 1.4.3243.0
**Repro Steps:**
1. Open Windows terminal app
2. Press "Ctrl+shift+Space" key to open the dropdown control.
3. Navigate to the Menu items.
4. Now press shortcut key defined for menu item say "Ctrl+," (which is defined for settings menu item).
5. Observe the issue.
**Actual Result:**
User is not able to perform action using shortcuts defined for the dropdown menu items.
Pressing shortcut key defined for menu items, nothing is getting activated.
**Expected Result:**
User should be able to perform action using shortcuts defined for the dropdown menu items.
Pressing shortcut key "Ctrl+," settings should get open.
[Functional_dropdown_menu keys not working.zip](https://github.com/microsoft/terminal/files/5629395/Functional_dropdown_menu.keys.not.working.zip)
|
non_process
|
user is not able to perform action using shortcuts defined for the dropdown menu items user experience users who rely on screen reader keyboard will not be able to perform action efficiently if shortcuts are not working for the dropdown menu items test environment os win build app windows terminal app version repro steps open windows terminal app press ctrl shift space key to open the dropdown control navigate to the menu items now press shortcut key defined for menu item say ctrl which is defined for settings menu item observe the issue actual result user is not able to perform action using shortcuts defined for the dropdown menu items pressing shortcut key defined for menu items nothing is getting activated expected result user should be able to perform action using shortcuts defined for the dropdown menu items pressing shortcut key ctrl settings should get open
| 0
|
2,077
| 4,892,161,095
|
IssuesEvent
|
2016-11-18 18:52:13
|
Sage-Bionetworks/Genie
|
https://api.github.com/repos/Sage-Bionetworks/Genie
|
opened
|
remove samples not marked 'keep' in variant merge list
|
data processing DFCI GRCC JHH MDA MSK NKI UHN VICC
|
Groups that want to keep variants will submit new mutation files.
|
1.0
|
remove samples not marked 'keep' in variant merge list - Groups that want to keep variants will submit new mutation files.
|
process
|
remove samples not marked keep in variant merge list groups that want to keep variants will submit new mutation files
| 1
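The record above asks to drop samples not marked 'keep' in the variant merge list. A hedged sketch of that filter step; the `flag` field name and record layout are hypothetical illustrations, not the Genie pipeline's actual schema:

```python
# Sketch: drop records whose keep-flag is not set in a variant merge list.
# The "flag" field name is a hypothetical stand-in for the real column.

def filter_kept(records):
    """Keep only records explicitly marked 'keep' (case-insensitive, trimmed)."""
    return [r for r in records
            if str(r.get("flag", "")).strip().lower() == "keep"]
```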
|
38,524
| 12,552,413,166
|
IssuesEvent
|
2020-06-06 18:00:45
|
nihalmurmu/automata
|
https://api.github.com/repos/nihalmurmu/automata
|
opened
|
CVE-2020-7656 (Medium) detected in jquery-1.7.1.min.js
|
security vulnerability
|
## CVE-2020-7656 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/automata/node_modules/sockjs/examples/hapi/html/index.html</p>
<p>Path to vulnerable library: /automata/node_modules/sockjs/examples/hapi/html/index.html,/automata/node_modules/sockjs/examples/echo/index.html,/automata/node_modules/sockjs/examples/multiplex/index.html,/automata/node_modules/sockjs/examples/express-3.x/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nihalmurmu/automata/commit/d287137dec49930597b48a147ee39781c3da3e66">d287137dec49930597b48a147ee39781c3da3e66</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e: "</script >", which results in the enclosed script logic to be executed.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7656">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7656</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: 1.9.0b1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7656 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2020-7656 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/automata/node_modules/sockjs/examples/hapi/html/index.html</p>
<p>Path to vulnerable library: /automata/node_modules/sockjs/examples/hapi/html/index.html,/automata/node_modules/sockjs/examples/echo/index.html,/automata/node_modules/sockjs/examples/multiplex/index.html,/automata/node_modules/sockjs/examples/express-3.x/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/nihalmurmu/automata/commit/d287137dec49930597b48a147ee39781c3da3e66">d287137dec49930597b48a147ee39781c3da3e66</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jquery prior to 1.9.0 allows Cross-site Scripting attacks via the load method. The load method fails to recognize and remove "<script>" HTML tags that contain a whitespace character, i.e: "</script >", which results in the enclosed script logic to be executed.
<p>Publish Date: 2020-05-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7656>CVE-2020-7656</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7656">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7656</a></p>
<p>Release Date: 2020-05-19</p>
<p>Fix Resolution: 1.9.0b1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm automata node modules sockjs examples hapi html index html path to vulnerable library automata node modules sockjs examples hapi html index html automata node modules sockjs examples echo index html automata node modules sockjs examples multiplex index html automata node modules sockjs examples express x index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery prior to allows cross site scripting attacks via the load method the load method fails to recognize and remove html tags that contain a whitespace character i e which results in the enclosed script logic to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
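The CVE record above hinges on jQuery's `load()` failing to strip a closing script tag that contains whitespace, i.e. `</script >`. The bypass can be illustrated by comparing a naive literal-string filter against a whitespace-tolerant pattern (function names are illustrative; the tolerant regex is similar in spirit to, not a copy of, jQuery's fix):

```python
# Sketch: why the CVE-2020-7656 bypass works. A naive filter matching the
# literal "</script>" misses "</script >"; a whitespace-tolerant pattern
# catches both. Names are illustrative, not jQuery internals.
import re

def naive_strip(html):
    # literal matching: "</script >" (with a space) slips through
    return html.replace("<script>", "").replace("</script>", "")

def tolerant_strip(html):
    # allow attributes/whitespace inside the tags before the closing ">"
    return re.sub(r"<script[^>]*>.*?</script\s*>", "", html,
                  flags=re.IGNORECASE | re.DOTALL)

payload = "<div><script>alert(1)</script ></div>"
```

With this payload, the naive filter leaves the script body in place while the tolerant one removes it, which is the behavior difference between jQuery <1.9.0 and the fixed versions.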
|
21,153
| 28,129,560,119
|
IssuesEvent
|
2023-03-31 21:07:45
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Big xslx file downloaded from BigQuery are corrupted in v0.45.3 and were not in v0.45.1
|
Type:Bug Priority:P1 .Performance Database/BigQuery Reporting/Export .Regression .Team/QueryProcessor :hammer_and_wrench:
|
**Describe the bug**
Big xslx file downloaded from BigQuery are corrupted in v0.45.3, and were okay in v0.45.1 (see screenshots)
**Logs**
Please include JavaScript console logs (open the browser dev tools and check the "console" tab) as well as the server logs (Metabase logs that you can get from Settings -> Admin -> Troubleshooting -> Logs) around the time this bug occurred. For information about how to get these, consult our [bug troubleshooting guide](https://metabase.com/docs/latest/troubleshooting-guide/bugs.html)
JS console logs:
<details>
```
DownloadButton.jsx:60 POST https://XXX/api/dataset/xlsx 502
(anonymous) @ DownloadButton.jsx:60
u @ runtime.js:63
(anonymous) @ runtime.js:293
(anonymous) @ runtime.js:118
b @ DownloadButton.styled.tsx:4
a @ DownloadButton.styled.tsx:4
(anonymous) @ DownloadButton.styled.tsx:4
(anonymous) @ DownloadButton.styled.tsx:4
(anonymous) @ DownloadButton.jsx:36
onSubmit @ DownloadButton.jsx:96
c @ react-dom.production.min.js:14
f @ react-dom.production.min.js:14
(anonymous) @ react-dom.production.min.js:14
M @ react-dom.production.min.js:15
ct @ react-dom.production.min.js:52
at @ react-dom.production.min.js:51
lt @ react-dom.production.min.js:52
bt @ react-dom.production.min.js:56
N @ react-dom.production.min.js:287
F @ react-dom.production.min.js:19
en @ react-dom.production.min.js:70
Qt @ react-dom.production.min.js:69
t.unstable_runWithPriority @ scheduler.production.min.js:19
Yi @ react-dom.production.min.js:122
D @ react-dom.production.min.js:287
Zt @ react-dom.production.min.js:68
```
</details>
Server logs:
- Not sure it's related, infinity of logs as:
<details>
```
472
2023-03-13 14:37:16,836 ERROR middleware.catch-exceptions :: Error processing query: Error running query
471
{:database_id 3,
470
:started_at #t "2023-03-13T14:27:10.463072Z[GMT]",
469
:via
468
[{:status :failed,
467
:class clojure.lang.ExceptionInfo,
466
:error "Error executing query: null",
465
:stacktrace
464
["--> driver.bigquery_cloud_sdk$throw_invalid_query.invokeStatic(bigquery_cloud_sdk.clj:173)"
463
"driver.bigquery_cloud_sdk$throw_invalid_query.invoke(bigquery_cloud_sdk.clj:172)"
462
"driver.bigquery_cloud_sdk$execute_bigquery.invokeStatic(bigquery_cloud_sdk.clj:215)"
461
"driver.bigquery_cloud_sdk$execute_bigquery.invoke(bigquery_cloud_sdk.clj:177)"
460
"driver.bigquery_cloud_sdk$execute_bigquery_on_db.invokeStatic(bigquery_cloud_sdk.clj:219)"
459
"driver.bigquery_cloud_sdk$execute_bigquery_on_db.invoke(bigquery_cloud_sdk.clj:217)"
458
"driver.bigquery_cloud_sdk$process_native_STAR_$thunk__84260.invoke(bigquery_cloud_sdk.clj:279)"
457
"driver.bigquery_cloud_sdk$process_native_STAR_.invokeStatic(bigquery_cloud_sdk.clj:291)"
456
"driver.bigquery_cloud_sdk$process_native_STAR_.invoke(bigquery_cloud_sdk.clj:271)"
455
"driver.bigquery_cloud_sdk$fn__84266.invokeStatic(bigquery_cloud_sdk.clj:307)"
"driver.bigquery_cloud_sdk$fn__84266.invoke(bigquery_cloud_sdk.clj:299)"
"query_processor.context$executef.invokeStatic(context.clj:59)"
"query_processor.context$executef.invoke(context.clj:48)"
"query_processor.context.default$default_runf.invokeStatic(default.clj:67)"
"query_processor.context.default$default_runf.invoke(default.clj:65)"
"query_processor.context$runf.invokeStatic(context.clj:45)"
"query_processor.context$runf.invoke(context.clj:39)"
"query_processor.reducible$identity_qp.invokeStatic(reducible.clj:12)"
"query_processor.reducible$identity_qp.invoke(reducible.clj:9)"
"query_processor.middleware.cache$maybe_return_cached_results$maybe_return_cached_results_STAR___53600.invoke(cache.clj:220)"
"query_processor.middleware.permissions$check_query_permissions$fn__49259.invoke(permissions.clj:109)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__52545.invoke(mbql_to_native.clj:23)"
"query_processor$fn__55076$combined_post_process__55081$combined_post_process_STAR___55082.invoke(query_processor.clj:212)"
"query_processor$fn__55076$combined_pre_process__55077$combined_pre_process_STAR___55078.invoke(query_processor.clj:209)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__53521$fn__53526.invoke(resolve_database_and_driver.clj:35)"
"driver$do_with_driver.invokeStatic(driver.clj:76)"
"driver$do_with_driver.invoke(driver.clj:72)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__53521.invoke(resolve_database_and_driver.clj:34)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__49525.invoke(fetch_source_query.clj:314)"
"query_processor.middleware.store$initialize_store$fn__49715$fn__49716.invoke(store.clj:11)"
"query_processor.store$do_with_store.invokeStatic(store.clj:45)"
"query_processor.store$do_with_store.invoke(store.clj:39)"
"query_processor.middleware.store$initialize_store$fn__49715.invoke(store.clj:10)"
"query_processor.middleware.normalize_query$normalize$fn__53793.invoke(normalize_query.clj:22)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__50803.invoke(constraints.clj:53)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__53732.invoke(process_userland_query.clj:145)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__54104.invoke(catch_exceptions.clj:167)"
"query_processor.reducible$async_qp$qp_STAR___45514$thunk__45516.invoke(reducible.clj:100)"
"query_processor.reducible$async_qp$qp_STAR___45514$fn__45518.invoke(reducible.clj:105)"],
:error_type :invalid-query,
:ex-data
{:type :invalid-query,
:sql
"-- Metabase:: userID: 3 queryType: MBQL queryHash: 49de1635bad2646028954eb16d8b2e9e8c540a52a07ed282b073489b643a3f36\nSELECT `invoicing.invoicing_venues`.`transaction_id` AS `transaction_id`, `invoicing.invoicing_venues`.`stripe_account_id` AS `stripe_account_id`, `invoicing.invoicing_venues`.`payout_id` AS `payout_id`, `invoicing.invoicing_venues`.`payout_created_local` AS `payout_created_local`, `invoicing.invoicing_venues`.`payout_balance_transaction` AS `payout_balance_transaction`, `invoicing.invoicing_venues`.`payout_arrival_date` AS `payout_arrival_date`, `invoicing.invoicing_venues`.`payout_arrival_date_format` AS `payout_arrival_date_format`, `invoicing.invoicing_venues`.`payout_currency` AS `payout_currency`, `invoicing.invoicing_venues`.`payout_amount` AS `payout_amount`, `invoicing.invoicing_venues`.`payout_destination` AS `payout_destination`, `invoicing.invoicing_venues`.`payout_statement_descriptor` AS `payout_statement_descriptor`, `invoicing.invoicing_venues`.`payout_status` AS `payout_status`, `invoicing.invoicing_venues`.`payout_failure_code` AS `payout_failure_code`, `invoicing.invoicing_venues`.`transaction_amount` AS `transaction_amount`, `invoicing.invoicing_venues`.`transaction_fee` AS `transaction_fee`, `invoicing.invoicing_venues`.`transaction_net` AS `transaction_net`, `invoicing.invoicing_venues`.`transaction_created_local` AS `transaction_created_local`, `invoicing.invoicing_venues`.`transaction_currency` AS `transaction_currency`, `invoicing.invoicing_venues`.`transaction_description` AS `transaction_description`, `invoicing.invoicing_venues`.`transaction_reporting_category` AS `transaction_reporting_category`, `invoicing.invoicing_venues`.`transaction_source` AS `transaction_source`, `invoicing.invoicing_venues`.`transaction_status` AS `transaction_status`, `invoicing.invoicing_venues`.`transaction_type` AS `transaction_type`, `invoicing.invoicing_venues`.`transaction_fee_ht` AS `transaction_fee_ht`, `invoicing.invoicing_venues`.`transaction_fee_tva` AS `transaction_fee_tva`, `invoicing.invoicing_venues`.`country_code` AS `country_code`, `invoicing.invoicing_venues`.`venue_region` AS `venue_region`, `invoicing.invoicing_venues`.`complete_name` AS `complete_name`, `invoicing.invoicing_venues`.`siret` AS `siret`, `invoicing.invoicing_venues`.`rcs` AS `rcs`, `invoicing.invoicing_venues`.`billing_address` AS `billing_address`, `invoicing.invoicing_venues`.`city` AS `city`, `invoicing.invoicing_venues`.`postal_code` AS `postal_code`, `invoicing.invoicing_venues`.`tax_id` AS `tax_id`, `invoicing.invoicing_venues`.`phone_number` AS `phone_number`, `invoicing.invoicing_venues`.`description` AS `description`, `invoicing.invoicing_venues`.`charge_description` AS `charge_description`, `invoicing.invoicing_venues`.`email_address` AS `email_address`, `invoicing.invoicing_venues`.`billing_email` AS `billing_email`, `invoicing.invoicing_venues`.`card_origin` AS `card_origin`, `invoicing.invoicing_venues`.`is_amex` AS `is_amex`, `invoicing.invoicing_venues`.`venue_id` AS `venue_id`, `invoicing.invoicing_venues`.`payout_fee` AS `payout_fee`, `invoicing.invoicing_venues`.`ledger_status` AS `ledger_status` FROM `invoicing.invoicing_venues` WHERE (`invoicing.invoicing_venues`.`transaction_created_local` >= ? AND `invoicing.invoicing_venues`.`transaction_created_local` < ? AND `invoicing.invoicing_venues`.`country_code` = ?) LIMIT 1048575",
:parameters (#t "2023-01-31T00:00" #t "2023-03-02T00:00" "FR")}}],
:error_type :invalid-query,
:json_query
{:database 3,
:query
{:source-table 82,
:filter ["and" ["between" ["field" 653 nil] "2023-01-31" "2023-03-01"] ["=" ["field" 643 nil] "FR"]]},
:type "query",
:middleware {:process-viz-settings? true, :skip-results-metadata? true, :format-rows? false},
:async? true,
:viz-settings
{:table.pivot false,
:table.pivot_column "payout_currency",
:table.cell_column "payout_amount",
:table.column_formatting [],
:metabase.shared.models.visualization-settings/column-settings {},
:metabase.shared.models.visualization-settings/table-columns
[{:metabase.shared.models.visualization-settings/table-column-name "transaction_id",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 641 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "stripe_account_id",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 644 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_id",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 642 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_created_local",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 668 {:temporal-unit :default}],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_balance_transaction",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 635 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_arrival_date",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 645 {:temporal-unit :default}],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_arrival_date_format",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 647 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_currency",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 639 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_amount",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 632 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_destination",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 669 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_statement_descriptor",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 659 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_status",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 656 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_failure_code",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 655 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_amount",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 662 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_fee",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 651 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_net",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 657 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_created_local",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 653 {:temporal-unit :default}],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_currency",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 652 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_description",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 630 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_reporting_category",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 640 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_source",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 637 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_status",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 631 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_type",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 660 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_fee_ht",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 650 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_fee_tva",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 663 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "country_code",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 643 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "venue_region",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 649 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "complete_name",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 634 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "siret",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 629 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "rcs",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 648 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "billing_address",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 638 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "city",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 646 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "postal_code",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 633 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "tax_id",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 661 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "phone_number",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 658 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "description",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 654 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "charge_description",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 671 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "email_address",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 664 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "billing_email",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 672 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "card_origin",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 666 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "is_amex",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 636 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "venue_id",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 665 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_fee",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 667 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "ledger_status",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 670 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}]}},
:native
{:query
"SELECT XXXXX LIMIT 1048575",
:params (#t "2023-01-31T00:00" #t "2023-03-02T00:00" "FR"),
:table-name "XXXXX",
:mbql? true},
:status :failed,
:class java.util.concurrent.CancellationException,
:stacktrace
["java.base/java.util.concurrent.FutureTask.report(Unknown Source)"
"java.base/java.util.concurrent.FutureTask.get(Unknown Source)"
"clojure.core$deref_future.invokeStatic(core.clj:2317)"
"clojure.core$future_call$reify__8544.deref(core.clj:7041)"
"clojure.core$deref.invokeStatic(core.clj:2337)"
"clojure.core$deref.invoke(core.clj:2323)"
"--> driver.bigquery_cloud_sdk$execute_bigquery.invokeStatic(bigquery_cloud_sdk.clj:181)"
"driver.bigquery_cloud_sdk$execute_bigquery.invoke(bigquery_cloud_sdk.clj:177)"
"driver.bigquery_cloud_sdk$execute_bigquery_on_db.invokeStatic(bigquery_cloud_sdk.clj:219)"
"driver.bigquery_cloud_sdk$execute_bigquery_on_db.invoke(bigquery_cloud_sdk.clj:217)"
"driver.bigquery_cloud_sdk$process_native_STAR_$thunk__84260.invoke(bigquery_cloud_sdk.clj:279)"
"driver.bigquery_cloud_sdk$process_native_STAR_.invokeStatic(bigquery_cloud_sdk.clj:291)"
"driver.bigquery_cloud_sdk$process_native_STAR_.invoke(bigquery_cloud_sdk.clj:271)"
"driver.bigquery_cloud_sdk$fn__84266.invokeStatic(bigquery_cloud_sdk.clj:307)"
"driver.bigquery_cloud_sdk$fn__84266.invoke(bigquery_cloud_sdk.clj:299)"
"query_processor.context$executef.invokeStatic(context.clj:59)"
"query_processor.context$executef.invoke(context.clj:48)"
"query_processor.context.default$default_runf.invokeStatic(default.clj:67)"
"query_processor.context.default$default_runf.invoke(default.clj:65)"
"query_processor.context$runf.invokeStatic(context.clj:45)"
"query_processor.context$runf.invoke(context.clj:39)"
"query_processor.reducible$identity_qp.invokeStatic(reducible.clj:12)"
"query_processor.reducible$identity_qp.invoke(reducible.clj:9)"
"query_processor.middleware.cache$maybe_return_cached_results$maybe_return_cached_results_STAR___53600.invoke(cache.clj:220)"
"query_processor.middleware.permissions$check_query_permissions$fn__49259.invoke(permissions.clj:109)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__52545.invoke(mbql_to_native.clj:23)"
"query_processor$fn__55076$combined_post_process__55081$combined_post_process_STAR___55082.invoke(query_processor.clj:212)"
"query_processor$fn__55076$combined_pre_process__55077$combined_pre_process_STAR___55078.invoke(query_processor.clj:209)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__53521$fn__53526.invoke(resolve_database_and_driver.clj:35)"
"driver$do_with_driver.invokeStatic(driver.clj:76)"
"driver$do_with_driver.invoke(driver.clj:72)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__53521.invoke(resolve_database_and_driver.clj:34)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__49525.invoke(fetch_source_query.clj:314)"
"query_processor.middleware.store$initialize_store$fn__49715$fn__49716.invoke(store.clj:11)"
"query_processor.store$do_with_store.invokeStatic(store.clj:45)"
"query_processor.store$do_with_store.invoke(store.clj:39)"
"query_processor.middleware.store$initialize_store$fn__49715.invoke(store.clj:10)"
"query_processor.middleware.normalize_query$normalize$fn__53793.invoke(normalize_query.clj:22)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__50803.invoke(constraints.clj:53)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__53732.invoke(process_userland_query.clj:145)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__54104.invoke(catch_exceptions.clj:167)"
"query_processor.reducible$async_qp$qp_STAR___45514$thunk__45516.invoke(reducible.clj:100)"
"query_processor.reducible$async_qp$qp_STAR___45514$fn__45518.invoke(reducible.clj:105)"],
:card_id nil,
:context :xlsx-download,
:error nil,
:row_count 0,
:running_time 0,
:preprocessed
{:database 3,
:query
{:source-table 82,
:filter
[:and
[:>= [:field 653 {:temporal-unit :default}] [:absolute-datetime #t "2023-01-31T00:00Z[GMT]" :default]]
[:< [:field 653 {:temporal-unit :default}] [:absolute-datetime #t "2023-03-02T00:00Z[GMT]" :default]]
[:=
[:field 643 nil]
[:value
"FR"
{:base_type :type/Text,
:effective_type :type/Text,
:coercion_strategy nil,
:semantic_type :type/Country,
:database_type "STRING",
:name "country_code"}]]],
:fields
[[:field 641 nil]
[:field 644 nil]
[:field 642 nil]
[:field 668 {:temporal-unit :default}]
[:field 635 nil]
[:field 645 {:temporal-unit :default}]
[:field 647 nil]
[:field 639 nil]
[:field 632 nil]
[:field 669 nil]
[:field 659 nil]
[:field 656 nil]
[:field 655 nil]
[:field 662 nil]
[:field 651 nil]
[:field 657 nil]
[:field 653 {:temporal-unit :default}]
[:field 652 nil]
[:field 630 nil]
[:field 640 nil]
[:field 637 nil]
[:field 631 nil]
[:field 660 nil]
[:field 650 nil]
[:field 663 nil]
[:field 643 nil]
[:field 649 nil]
[:field 634 nil]
[:field 629 nil]
[:field 648 nil]
[:field 638 nil]
[:field 646 nil]
[:field 633 nil]
[:field 661 nil]
[:field 658 nil]
[:field 654 nil]
[:field 671 nil]
[:field 664 nil]
[:field 672 nil]
[:field 666 nil]
[:field 636 nil]
[:field 665 nil]
[:field 667 nil]
[:field 670 nil]],
:limit 1048575,
:metabase.query-processor.middleware.limit/original-limit nil},
:type :query,
:middleware {:process-viz-settings? true, :skip-results-metadata? true, :format-rows? false},
:async? true,
:viz-settings
{:table.pivot false,
:table.pivot_column "payout_currency",
:table.cell_column "payout_amount",
:table.column_formatting [],
:metabase.shared.models.visualization-settings/column-settings {},
:metabase.shared.models.visualization-settings/table-columns
[{:metabase.shared.models.visualization-settings/table-column-name "transaction_id",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 641 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "stripe_account_id",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 644 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_id",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 642 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_created_local",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 668 {:temporal-unit :default}],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_balance_transaction",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 635 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_arrival_date",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 645 {:temporal-unit :default}],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_arrival_date_format",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 647 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_currency",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 639 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_amount",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 632 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_destination",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 669 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_statement_descriptor",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 659 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_status",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 656 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_failure_code",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 655 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_amount",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 662 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_fee",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 651 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_net",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 657 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_created_local",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 653 {:temporal-unit :default}],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_currency",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 652 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_description",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 630 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_reporting_category",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 640 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_source",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 637 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_status",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 631 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_type",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 660 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_fee_ht",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 650 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "transaction_fee_tva",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 663 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "country_code",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 643 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "venue_region",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 649 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "complete_name",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 634 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "siret",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 629 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "rcs",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 648 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "billing_address",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 638 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "city",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 646 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "postal_code",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 633 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "tax_id",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 661 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "phone_number",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 658 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "description",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 654 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "charge_description",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 671 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "email_address",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 664 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "billing_email",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 672 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "card_origin",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 666 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "is_amex",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 636 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "venue_id",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 665 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "payout_fee",
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 667 nil],
:metabase.shared.models.visualization-settings/table-column-enabled true}
{:metabase.shared.models.visualization-settings/table-column-name "ledger_status",
5
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 670 nil],
4
:metabase.shared.models.visualization-settings/table-column-enabled true}]},
3
:info {:executed-by 3, :context :xlsx-download}},
2
:data {:rows [], :cols []}}
```
</details>
Downloaded xlsx file content:
```
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>502 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>
```
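As the file content above shows, the "xlsx" we get back is actually the proxy's HTML error page, not a spreadsheet. A valid .xlsx file is a ZIP container and always starts with the bytes `PK\x03\x04`, so a quick magic-byte check (a minimal sketch of my own, not Metabase code) tells the two apart before even trying to open the file in Excel:

```python
# Sanity check for a downloaded "xlsx": a real .xlsx is a ZIP container
# whose first four bytes are the local-file-header signature b"PK\x03\x04".
# A 502 response saved to disk starts with HTML instead.

XLSX_MAGIC = b"PK\x03\x04"

def looks_like_xlsx(first_bytes: bytes) -> bool:
    """Return True if the byte prefix matches the ZIP/xlsx signature."""
    return first_bytes.startswith(XLSX_MAGIC)

def check_download(path: str) -> bool:
    """Read the first bytes of a downloaded file and classify it."""
    with open(path, "rb") as f:
        return looks_like_xlsx(f.read(4))

if __name__ == "__main__":
    # The corrupted download in this report begins with an HTML document:
    print(looks_like_xlsx(b"<html><head>"))       # the 502 error page -> False
    print(looks_like_xlsx(b"PK\x03\x04rest"))     # a real xlsx prefix -> True
```

Running this against the downloaded files is how we confirmed the v0.45.3 downloads are HTML error pages while the v0.45.1 downloads are real ZIP containers.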
**To Reproduce**
Steps to reproduce the behavior (if you can reproduce the bug using the Sample Database, we will find the issue faster):
1. Run a question that executes a large select statement on BigQuery
2. Click on 'Download full results > .xlsx'
3. The downloaded file can't be opened
**Expected behavior**
I would expect the downloaded xlsx file to be valid regardless of the size of the select statement
**Screenshots/videos**
The download button used:

The error when opening the file:
<img width="306" alt="Capture d’écran 2023-03-10 à 11 50 54" src="https://user-images.githubusercontent.com/22528531/224297740-ee15d8bb-028d-4bb4-9557-9ddd479d373e.png">
**Information about your Metabase Installation:**
- Your browser and the version:
"browser-info": {
"language": "fr-FR",
"platform": "MacIntel",
"userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36",
"vendor": "Google Inc."
},
- Your operating system:
"system-info": {
"file.encoding": "UTF-8",
"java.runtime.name": "OpenJDK Runtime Environment",
"java.runtime.version": "11.0.17+8",
"java.vendor": "Eclipse Adoptium",
"java.vendor.url": "https://adoptium.net/",
"java.version": "11.0.17",
"java.vm.name": "OpenJDK 64-Bit Server VM",
"java.vm.version": "11.0.17+8",
"os.name": "Linux",
"os.version": "5.10.147+",
"user.language": "en",
"user.timezone": "GMT"
},
- Your databases: BigQuery
- Metabase version: 0.45.3
- Metabase hosting environment: Google Kubernetes Engine
- Metabase internal database: Postgres
**Severity**
This issue is not blocking our analytics, but it is a serious blocker for some critical operations carried out by our business teams
**Additional context**
- Reverting to v0.45.1 solved the issue, so the problem was introduced recently.
- The BQ query itself is a simple select statement with a where clause, which takes around 4 seconds
- The download itself takes several minutes
- Digging into BigQuery, it seems we can't download an xlsx file from there, and downloading CSV files is limited to 10 MB
- Our BQ query results are around 250 MB
- Improving the BigQuery table partitioning to lighten the pagination calls made by Metabase during the download did not help
Thanks in advance for your help! 🙏
EDIT 2023-03-13: Logs section
121
{:metabase.shared.models.visualization-settings/table-column-name "payout_arrival_date",
120
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 645 {:temporal-unit :default}],
119
:metabase.shared.models.visualization-settings/table-column-enabled true}
118
{:metabase.shared.models.visualization-settings/table-column-name "payout_arrival_date_format",
117
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 647 nil],
116
:metabase.shared.models.visualization-settings/table-column-enabled true}
115
{:metabase.shared.models.visualization-settings/table-column-name "payout_currency",
114
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 639 nil],
113
:metabase.shared.models.visualization-settings/table-column-enabled true}
112
{:metabase.shared.models.visualization-settings/table-column-name "payout_amount",
111
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 632 nil],
110
:metabase.shared.models.visualization-settings/table-column-enabled true}
109
{:metabase.shared.models.visualization-settings/table-column-name "payout_destination",
108
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 669 nil],
107
:metabase.shared.models.visualization-settings/table-column-enabled true}
106
{:metabase.shared.models.visualization-settings/table-column-name "payout_statement_descriptor",
105
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 659 nil],
104
:metabase.shared.models.visualization-settings/table-column-enabled true}
103
{:metabase.shared.models.visualization-settings/table-column-name "payout_status",
102
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 656 nil],
101
:metabase.shared.models.visualization-settings/table-column-enabled true}
100
:metabase.shared.models.visualization-settings/table-column-enabled true}
99
{:metabase.shared.models.visualization-settings/table-column-name "payout_failure_code",
98
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 655 nil],
97
:metabase.shared.models.visualization-settings/table-column-enabled true}
96
{:metabase.shared.models.visualization-settings/table-column-name "transaction_amount",
95
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 662 nil],
94
:metabase.shared.models.visualization-settings/table-column-enabled true}
93
{:metabase.shared.models.visualization-settings/table-column-name "transaction_fee",
92
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 651 nil],
91
:metabase.shared.models.visualization-settings/table-column-enabled true}
90
{:metabase.shared.models.visualization-settings/table-column-name "transaction_net",
89
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 657 nil],
88
:metabase.shared.models.visualization-settings/table-column-enabled true}
87
{:metabase.shared.models.visualization-settings/table-column-name "transaction_created_local",
86
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 653 {:temporal-unit :default}],
85
:metabase.shared.models.visualization-settings/table-column-enabled true}
84
{:metabase.shared.models.visualization-settings/table-column-name "transaction_currency",
83
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 652 nil],
82
:metabase.shared.models.visualization-settings/table-column-enabled true}
81
{:metabase.shared.models.visualization-settings/table-column-name "transaction_description",
80
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 630 nil],
79
:metabase.shared.models.visualization-settings/table-column-enabled true}
78
{:metabase.shared.models.visualization-settings/table-column-name "transaction_reporting_category",
77
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 640 nil],
76
:metabase.shared.models.visualization-settings/table-column-enabled true}
75
{:metabase.shared.models.visualization-settings/table-column-name "transaction_source",
74
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 637 nil],
73
:metabase.shared.models.visualization-settings/table-column-enabled true}
72
{:metabase.shared.models.visualization-settings/table-column-name "transaction_status",
71
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 631 nil],
70
:metabase.shared.models.visualization-settings/table-column-enabled true}
69
{:metabase.shared.models.visualization-settings/table-column-name "transaction_type",
68
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 660 nil],
67
:metabase.shared.models.visualization-settings/table-column-enabled true}
66
{:metabase.shared.models.visualization-settings/table-column-name "transaction_fee_ht",
65
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 650 nil],
64
:metabase.shared.models.visualization-settings/table-column-enabled true}
63
{:metabase.shared.models.visualization-settings/table-column-name "transaction_fee_tva",
62
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 663 nil],
61
:metabase.shared.models.visualization-settings/table-column-enabled true}
60
{:metabase.shared.models.visualization-settings/table-column-name "country_code",
59
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 643 nil],
58
:metabase.shared.models.visualization-settings/table-column-enabled true}
57
{:metabase.shared.models.visualization-settings/table-column-name "venue_region",
56
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 649 nil],
55
:metabase.shared.models.visualization-settings/table-column-enabled true}
54
{:metabase.shared.models.visualization-settings/table-column-name "complete_name",
53
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 634 nil],
52
:metabase.shared.models.visualization-settings/table-column-enabled true}
51
{:metabase.shared.models.visualization-settings/table-column-name "siret",
50
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 629 nil],
49
:metabase.shared.models.visualization-settings/table-column-enabled true}
48
{:metabase.shared.models.visualization-settings/table-column-name "rcs",
47
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 648 nil],
46
:metabase.shared.models.visualization-settings/table-column-enabled true}
45
{:metabase.shared.models.visualization-settings/table-column-name "billing_address",
44
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 638 nil],
43
:metabase.shared.models.visualization-settings/table-column-enabled true}
42
{:metabase.shared.models.visualization-settings/table-column-name "city",
41
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 646 nil],
40
:metabase.shared.models.visualization-settings/table-column-enabled true}
39
{:metabase.shared.models.visualization-settings/table-column-name "postal_code",
38
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 633 nil],
37
:metabase.shared.models.visualization-settings/table-column-enabled true}
36
{:metabase.shared.models.visualization-settings/table-column-name "tax_id",
35
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 661 nil],
34
:metabase.shared.models.visualization-settings/table-column-enabled true}
33
{:metabase.shared.models.visualization-settings/table-column-name "phone_number",
32
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 658 nil],
31
:metabase.shared.models.visualization-settings/table-column-enabled true}
30
{:metabase.shared.models.visualization-settings/table-column-name "description",
29
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 654 nil],
28
:metabase.shared.models.visualization-settings/table-column-enabled true}
27
{:metabase.shared.models.visualization-settings/table-column-name "charge_description",
26
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 671 nil],
25
:metabase.shared.models.visualization-settings/table-column-enabled true}
24
{:metabase.shared.models.visualization-settings/table-column-name "email_address",
23
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 664 nil],
22
:metabase.shared.models.visualization-settings/table-column-enabled true}
21
{:metabase.shared.models.visualization-settings/table-column-name "billing_email",
20
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 672 nil],
19
:metabase.shared.models.visualization-settings/table-column-enabled true}
18
{:metabase.shared.models.visualization-settings/table-column-name "card_origin",
17
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 666 nil],
16
:metabase.shared.models.visualization-settings/table-column-enabled true}
15
{:metabase.shared.models.visualization-settings/table-column-name "is_amex",
14
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 636 nil],
13
:metabase.shared.models.visualization-settings/table-column-enabled true}
12
{:metabase.shared.models.visualization-settings/table-column-name "venue_id",
11
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 665 nil],
10
:metabase.shared.models.visualization-settings/table-column-enabled true}
9
{:metabase.shared.models.visualization-settings/table-column-name "payout_fee",
8
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 667 nil],
7
:metabase.shared.models.visualization-settings/table-column-enabled true}
6
{:metabase.shared.models.visualization-settings/table-column-name "ledger_status",
5
:metabase.shared.models.visualization-settings/table-column-field-ref [:field 670 nil],
4
:metabase.shared.models.visualization-settings/table-column-enabled true}]},
3
:info {:executed-by 3, :context :xlsx-download}},
2
:data {:rows [], :cols []}}
```
</details>
Downloaded xlsx file content (an HTML error page instead of a spreadsheet):
```
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>502 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>
```
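For anyone hitting this, a quick way to confirm that a download is the proxy's HTML error page rather than a merely truncated spreadsheet is to look at the file's first bytes: a valid .xlsx file is a ZIP archive and always starts with the signature `PK\x03\x04`. A minimal sketch (not Metabase code):

```python
def classify_download(data: bytes) -> str:
    """Guess whether downloaded bytes are a real xlsx or an HTML error page."""
    if data[:4] == b"PK\x03\x04":
        # ZIP local file header signature -> a genuine xlsx container
        return "xlsx (ZIP archive)"
    if data.lstrip()[:5].lower() == b"<html":
        # The download is an HTML document, e.g. a 502 page from a proxy
        return "HTML error page"
    return "unknown"

# First bytes of the file attached to this report:
sample = b'<html><head>\n<meta http-equiv="content-type" content="text/html;charset=utf-8">'
print(classify_download(sample))  # -> HTML error page
```

Running `file` on the downloaded file from a terminal gives the same kind of answer without any code.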
**To Reproduce**
Steps to reproduce the behavior (if you can reproduce the bug using the Sample Database, we will find the issue faster):
1. Launch a question that runs a large select statement on BigQuery
2. Click on 'Download full results > .xlsx'
3. The downloaded file can't be opened
**Expected behavior**
I would expect the downloaded .xlsx file to be valid regardless of the size of the query results
**Screenshots/videos**
The downloading button used:

The error when opening the file:
<img width="306" alt="Capture d’écran 2023-03-10 à 11 50 54" src="https://user-images.githubusercontent.com/22528531/224297740-ee15d8bb-028d-4bb4-9557-9ddd479d373e.png">
**Information about your Metabase Installation:**
- Your browser and the version:
"browser-info": {
"language": "fr-FR",
"platform": "MacIntel",
"userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36",
"vendor": "Google Inc."
},
- Your operating system:
"system-info": {
"file.encoding": "UTF-8",
"java.runtime.name": "OpenJDK Runtime Environment",
"java.runtime.version": "11.0.17+8",
"java.vendor": "Eclipse Adoptium",
"java.vendor.url": "https://adoptium.net/",
"java.version": "11.0.17",
"java.vm.name": "OpenJDK 64-Bit Server VM",
"java.vm.version": "11.0.17+8",
"os.name": "Linux",
"os.version": "5.10.147+",
"user.language": "en",
"user.timezone": "GMT"
},
- Your databases: BigQuery
- Metabase version: 0.45.3
- Metabase hosting environment: Google Kubernetes Engine
- Metabase internal database: Postgres
**Severity**
This issue is not blocking our analytics but is a serious blocker for some critical operations carried out by our business teams
**Additional context**
- Reverting to v0.45.1 solved the issue, so the problem was introduced recently.
- The BQ query itself is a simple select statement with a where clause, which takes around 4 seconds
- The download itself takes several minutes
- Digging in BQ, it seems we can't download an xlsx file from there, and downloading CSV files is limited to 10 MB
- Our BQ query results seem to be around 250 MB
- Improving the BigQuery table partitioning to lighten the pagination calls launched by Metabase during the download did not work
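One more observation: the "Please try again in 30 seconds" 502 page looks like the Google Cloud HTTP(S) load balancer's default 30-second backend timeout rather than a response generated by Metabase, and the export here takes several minutes. If that is indeed the cause, raising the backend timeout via a `BackendConfig` attached to the Metabase Service might work around the symptom while the regression itself is investigated. This is only a sketch under that assumption; resource names and the timeout value are placeholders:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: metabase-backendconfig   # placeholder name
spec:
  timeoutSec: 600                # allow exports that run for several minutes
---
apiVersion: v1
kind: Service
metadata:
  name: metabase                 # placeholder name
  annotations:
    cloud.google.com/backend-config: '{"default": "metabase-backendconfig"}'
spec:
  selector:
    app: metabase
  ports:
    - port: 80
      targetPort: 3000           # Metabase's default listen port
```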
Thanks in advance for your help! 🙏
EDIT 2023-03-13: Logs section
name transaction id metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name stripe account id metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name payout id metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name payout created local metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name payout balance transaction metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name payout arrival date metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name payout arrival date format metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name payout currency metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name payout amount metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings 
table column name payout destination metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name payout statement descriptor metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name payout status metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name payout failure code metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name transaction amount metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name transaction fee metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name transaction net metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name transaction created local metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name transaction currency metabase shared models visualization settings table column field ref metabase shared 
models visualization settings table column enabled true metabase shared models visualization settings table column name transaction description metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name transaction reporting category metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name transaction source metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name transaction status metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name transaction type metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name transaction fee ht metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name transaction fee tva metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name country code metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name venue region metabase shared models visualization settings table column field 
ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name complete name metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name siret metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name rcs metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name billing address metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name city metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name postal code metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name tax id metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name phone number metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name description metabase shared models visualization settings table column field ref metabase shared models visualization settings table column 
enabled true metabase shared models visualization settings table column name charge description metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name email address metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name billing email metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name card origin metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name is amex metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name venue id metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name payout fee metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true metabase shared models visualization settings table column name ledger status metabase shared models visualization settings table column field ref metabase shared models visualization settings table column enabled true info executed by context xlsx download data rows cols downloaded xslx file content server error error server error the server encountered a temporary error and could not complete your request please try again in seconds to reproduce steps to 
reproduce the behavior if you can reproduce the bug using the sample database we will find the issue faster launch a question which carries out a big select statement on bigquery click on download full results xslx the downloaded results can t be opened expected behavior i would expect the downloaded xslx to be healthy whatever the select statement size screenshots videos the downloading button used the error when opening the file img width alt capture d’écran à src information about your metabase installation your browser and the version browser info language fr fr platform macintel useragent mozilla macintosh intel mac os x applewebkit khtml like gecko chrome safari vendor google inc your operating system system info file encoding utf java runtime name openjdk runtime environment java runtime version java vendor eclipse adoptium java vendor url java version java vm name openjdk bit server vm java vm version os name linux os version user language en user timezone gmt your databases bigquery metabase version metabase hosting environment google kubernetes engine metabase internal database postgres severity this issue is not blocking our analytics but is a serious blocker for some critical operations carried out by our business teams additional context reverting to solved the issue so the problem was introduced recently the bq query itself is a simple select statement with a where clause which takes around seconds the downloading itself takes several minutes digging in bq it seems we can t download an xslx file from there and downloading csv files is limited to our bq query results seem to be around mb improving the bigquery table partitioning to lighten pagination calls launched by metabase during the downloading did not work thanks in advance for your help 🙏 edit logs section
| 1
|
12,139
| 14,741,102,875
|
IssuesEvent
|
2021-01-07 10:06:07
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Speed-E’z - Billing Rate - Dispatching Services
|
anc-process anp-1 ant-bug
|
In GitLab by @kdjstudios on Jan 2, 2019, 07:55
**Submitted by:** Jo Ann Browne <joann@speedez.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6464071
**Server:** External
**Client/Site:** Speed Ez
**Account:** NA
**Issue:**
Way back in April your guys built me a Billing rate called Dispatching Services which all of my SMS, SMS+, text message and Secure text messages were supposed to dump in so they could be billed. Well I am billing right now and I noticed (should have caught this a long time ago) but if the accounts that have Dispatching Services on it NOTHING is being billed to it. They are only getting billed if they have the wordage of SMS or text messages. This wordage was removed when you finished the Dispatching Services -2051 code many months ago. PLEASE check this for me as it WAS working at one time and no one knows how much money has been lost. Thanks for your help. Oh by the way I cannot bill tomorrow till this is fixed.
|
1.0
|
Speed-E’z - Billing Rate - Dispatching Services - In GitLab by @kdjstudios on Jan 2, 2019, 07:55
**Submitted by:** Jo Ann Browne <joann@speedez.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/6464071
**Server:** External
**Client/Site:** Speed Ez
**Account:** NA
**Issue:**
Way back in April your guys built me a Billing rate called Dispatching Services which all of my SMS, SMS+, text message and Secure text messages were supposed to dump in so they could be billed. Well I am billing right now and I noticed (should have caught this a long time ago) but if the accounts that have Dispatching Services on it NOTHING is being billed to it. They are only getting billed if they have the wordage of SMS or text messages. This wordage was removed when you finished the Dispatching Services -2051 code many months ago. PLEASE check this for me as it WAS working at one time and no one knows how much money has been lost. Thanks for your help. Oh by the way I cannot bill tomorrow till this is fixed.
|
process
|
speed e’z billing rate dispatching services in gitlab by kdjstudios on jan submitted by jo ann browne helpdesk server external client site speed ez account na issue way back in april your guys built me a billing rate called dispatching services which all of my sms sms text message and secure text messages were supposed to dump in so they could be billed well i am billing right now and i noticed should have caught this a long time ago but if the accounts that have dispatching services on it nothing is being billed to it they are only getting billed if they have the wordage of sms or text messages this wordage was removed when you finished the dispatching services code many months ago please check this for me as it was working at one time and no one knows how much money has been lost thanks for your help oh by the way i cannot bill tomorrow till this is fixed
| 1
|
45,453
| 12,809,272,253
|
IssuesEvent
|
2020-07-03 15:17:01
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
ON DUPLICATE KEY IGNORE emulation should not apply if primary key is identity
|
C: Functionality E: All Editions P: Medium T: Defect
|
Assuming a table like this:
```sql
CREATE TABLE t_identity_pk (
id INTEGER NOT NULL AUTO_INCREMENT,
val int,
CONSTRAINT pk_t_identity_pk PRIMARY KEY (id)
)
```
When inserting into this table:
```java
ctx.insertInto(T_IDENTITY_PK)
.columns(VAL)
.values(1)
.onDuplicateKeyIgnore()
.execute();
```
Then the `onDuplicateKeyIgnore()` clause can be ignored, because there will never be a duplicate key
|
1.0
|
ON DUPLICATE KEY IGNORE emulation should not apply if primary key is identity - Assuming a table like this:
```sql
CREATE TABLE t_identity_pk (
id INTEGER NOT NULL AUTO_INCREMENT,
val int,
CONSTRAINT pk_t_identity_pk PRIMARY KEY (id)
)
```
When inserting into this table:
```java
ctx.insertInto(T_IDENTITY_PK)
.columns(VAL)
.values(1)
.onDuplicateKeyIgnore()
.execute();
```
Then the `onDuplicateKeyIgnore()` clause can be ignored, because there will never be a duplicate key
|
non_process
|
on duplicate key ignore emulation should not apply if primary key is identity assuming a table like this sql create table t identity pk id integer not null auto increment val int constraint pk t identity pk primary key id when inserting into this table java ctx insertinto t identity pk columns val values onduplicatekeyignore execute then the onduplicatekeyignore clause can be ignored because there will never be a duplicate key
| 0
|
2,991
| 5,886,878,705
|
IssuesEvent
|
2017-05-17 05:06:03
|
Modularr/YAML-FrontMatter
|
https://api.github.com/repos/Modularr/YAML-FrontMatter
|
closed
|
Markdown containing horizontal rows breaks parser
|
bug compatibility
|
If you load a document, for example:
---
title: Some title
date: 2017-05-15
---
This is a markdown document separated by horizontal rows:
---
There is one above!
The FrontMatter will try to split the document by ``---``, and then gives Symfony's YAML the 2nd position of the array, which is in this case "This is a markdown (...)", instead of the desired section. If you check the [markdown reference](http://daringfireball.net/projects/markdown/syntax) or the [GitHub flavored Markdown reference](https://help.github.com/articles/organizing-information-with-tables/), you will see that ``---`` appears in other options, such as headers and tables. So when a document includes such formatting, it will cause trouble on FrontMatter.
I hope I'm not being too annoying by opening a new issue :) I use FrontMatter on some projects and it is also of my interest to see your project doing well.
|
True
|
Markdown containing horizontal rows breaks parser - If you load a document, for example:
---
title: Some title
date: 2017-05-15
---
This is a markdown document separated by horizontal rows:
---
There is one above!
The FrontMatter will try to split the document by ``---``, and then gives Symfony's YAML the 2nd position of the array, which is in this case "This is a markdown (...)", instead of the desired section. If you check the [markdown reference](http://daringfireball.net/projects/markdown/syntax) or the [GitHub flavored Markdown reference](https://help.github.com/articles/organizing-information-with-tables/), you will see that ``---`` appears in other options, such as headers and tables. So when a document includes such formatting, it will cause trouble on FrontMatter.
I hope I'm not being too annoying by opening a new issue :) I use FrontMatter on some projects and it is also of my interest to see your project doing well.
|
non_process
|
markdown containing horizontal rows breaks parser if you load a document for example title some title date this is a markdown document separated by horizontal rows there is one above the frontmatter will try to split the document by and then gives symfony s yaml the position of array which is in this case this is a markdown instead of the desired section if you check the or the you will see that appears in other options such as headers and tables so when a document includes such formatting it will cause trouble on frontmatter i hope i m not being too annoying by opening a new issue i use frontmatter on some projects and it is also of my interest to see your project doing well
| 0
|
20,907
| 27,749,788,320
|
IssuesEvent
|
2023-03-15 19:46:19
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
opened
|
[processor/spanmetrics] Deprecate processor
|
processor/spanmetrics connector/spanmetrics
|
### Component(s)
connector/spanmetrics, processor/spanmetrics
### Describe the issue you're reporting
This is a tracking issue relating to https://github.com/open-telemetry/opentelemetry-collector/issues/737
The processor has been reimplemented as a connector. (See #18760)
We should deprecate the processor when the code owners believe the time is appropriate.
|
1.0
|
[processor/spanmetrics] Deprecate processor - ### Component(s)
connector/spanmetrics, processor/spanmetrics
### Describe the issue you're reporting
This is a tracking issue relating to https://github.com/open-telemetry/opentelemetry-collector/issues/737
The processor has been reimplemented as a connector. (See #18760)
We should deprecate the processor when the code owners believe the time is appropriate.
|
process
|
deprecate processor component s connector spanmetrics processor spanmetrics describe the issue you re reporting this is a tracking issue relating to the processor has been reimplemented as a connector see we should deprecate the processor when the code owners believe the time is appropriate
| 1
|
308,817
| 26,633,032,738
|
IssuesEvent
|
2023-01-24 19:24:11
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
Test failure: All/DIPSTabHelperBrowserTest.Histograms_StorageThenClick_Incognito/1
|
QA/No release-notes/exclude closed/stale ci-concern bot/type/test bot/arch/x64 bot/channel/nightly bot/platform/linux bot/branch/master
|
Greetings human!
Bad news. `All/DIPSTabHelperBrowserTest.Histograms_StorageThenClick_Incognito/1` [failed on linux x64 nightly master](https://ci.brave.com/job/brave-browser-build-linux-x64-asan/358/testReport/junit/(root)/All_DIPSTabHelperBrowserTest/linux_x64___test_browser_chromium___Histograms_StorageThenClick_Incognito_1).
<details>
<summary>Stack trace</summary>
```
[ RUN ] All/DIPSTabHelperBrowserTest.Histograms_StorageThenClick_Incognito/1
[1874295:1874295:0102/094433.815458:WARNING:chrome_main_delegate.cc(607)] This is Chrome version 109.1.48.104 (not a warning)
[1874295:1874323:0102/094434.236294:ERROR:object_proxy.cc(623)] Failed to call method: org.freedesktop.DBus.Properties.Get: object_path= /org/freedesktop/portal/desktop: org.freedesktop.DBus.Error.InvalidArgs: No such interface “org.freedesktop.portal.FileChooser”
[1874295:1874323:0102/094434.238487:WARNING:property.cc(144)] version: GetAndBlock: failed.
[1874295:1874323:0102/094434.238540:ERROR:select_file_dialog_linux_portal.cc(274)] Failed to read portal version property
[1874295:1874295:0102/094434.307368:WARNING:external_provider_impl.cc(510)] Malformed extension dictionary for extension: odbfpeeihdkbihmopkbjmoonfanlbfcl. Key external_update_url has value "", which is not a valid URL.
libva error: vaGetDriverNameByIndex() failed with unknown libva error, driver_name = (null)
[1874295:1874295:0102/094434.318447:WARNING:bluez_dbus_manager.cc(247)] Floss manager not present, cannot set Floss enable/disable.
[1874332:1874332:0102/094434.324309:WARNING:sandbox_linux.cc(385)] InitializeSandbox() called with multiple threads in process gpu-process.
[1874332:1874332:0102/094434.378156:ERROR:gpu_memory_buffer_support_x11.cc(49)] dri3 extension not supported.
[1874295:1874411:0102/094434.849339:WARNING:embedded_test_server.cc(674)] Request not handled. Returning 404: /favicon.ico
../../base/test/metrics/histogram_tester.cc:90: Failure
Expected equality of these values:
count
Which is: 1
0
Histogram "Privacy.DIPS.TimeFromStorageToInteraction.OffTheRecord_Block3PC" does not exist.
Stack trace:
#0 0x559982e68582 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x29e7f581)
#1 0x559971105313 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x1811c312)
#2 0x559984e5d215 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2be74214)
#3 0x559984e63720 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2be7a71f)
#4 0x55997a848d5a (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2185fd59)
#5 0x55997a84f8a7 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x218668a6)
#6 0x55997a8430cc (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2185a0cb)
#7 0x55998117dc25 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x28194c24)
#8 0x559981180d77 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x28197d76)
#9 0x559981180682 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x28197681)
#10 0x559981178c4e (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2818fc4d)
#11 0x55998117925a (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x28190259)
#12 0x559984e5b5bf (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2be725be)
#13 0x559982a9123b (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x29aa823a)
#14 0x55997110e755 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x18125754)
[ FAILED ] All/DIPSTabHelperBrowserTest.Histograms_StorageThenClick_Incognito/1, where GetParam() = true (1335 ms)
[ FAILED ] All/DIPSTabHelperBrowserTest.Histograms_StorageThenClick_Incognito/1
```
</details>
|
1.0
|
Test failure: All/DIPSTabHelperBrowserTest.Histograms_StorageThenClick_Incognito/1 - Greetings human!
Bad news. `All/DIPSTabHelperBrowserTest.Histograms_StorageThenClick_Incognito/1` [failed on linux x64 nightly master](https://ci.brave.com/job/brave-browser-build-linux-x64-asan/358/testReport/junit/(root)/All_DIPSTabHelperBrowserTest/linux_x64___test_browser_chromium___Histograms_StorageThenClick_Incognito_1).
<details>
<summary>Stack trace</summary>
```
[ RUN ] All/DIPSTabHelperBrowserTest.Histograms_StorageThenClick_Incognito/1
[1874295:1874295:0102/094433.815458:WARNING:chrome_main_delegate.cc(607)] This is Chrome version 109.1.48.104 (not a warning)
[1874295:1874323:0102/094434.236294:ERROR:object_proxy.cc(623)] Failed to call method: org.freedesktop.DBus.Properties.Get: object_path= /org/freedesktop/portal/desktop: org.freedesktop.DBus.Error.InvalidArgs: No such interface “org.freedesktop.portal.FileChooser”
[1874295:1874323:0102/094434.238487:WARNING:property.cc(144)] version: GetAndBlock: failed.
[1874295:1874323:0102/094434.238540:ERROR:select_file_dialog_linux_portal.cc(274)] Failed to read portal version property
[1874295:1874295:0102/094434.307368:WARNING:external_provider_impl.cc(510)] Malformed extension dictionary for extension: odbfpeeihdkbihmopkbjmoonfanlbfcl. Key external_update_url has value "", which is not a valid URL.
libva error: vaGetDriverNameByIndex() failed with unknown libva error, driver_name = (null)
[1874295:1874295:0102/094434.318447:WARNING:bluez_dbus_manager.cc(247)] Floss manager not present, cannot set Floss enable/disable.
[1874332:1874332:0102/094434.324309:WARNING:sandbox_linux.cc(385)] InitializeSandbox() called with multiple threads in process gpu-process.
[1874332:1874332:0102/094434.378156:ERROR:gpu_memory_buffer_support_x11.cc(49)] dri3 extension not supported.
[1874295:1874411:0102/094434.849339:WARNING:embedded_test_server.cc(674)] Request not handled. Returning 404: /favicon.ico
../../base/test/metrics/histogram_tester.cc:90: Failure
Expected equality of these values:
count
Which is: 1
0
Histogram "Privacy.DIPS.TimeFromStorageToInteraction.OffTheRecord_Block3PC" does not exist.
Stack trace:
#0 0x559982e68582 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x29e7f581)
#1 0x559971105313 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x1811c312)
#2 0x559984e5d215 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2be74214)
#3 0x559984e63720 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2be7a71f)
#4 0x55997a848d5a (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2185fd59)
#5 0x55997a84f8a7 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x218668a6)
#6 0x55997a8430cc (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2185a0cb)
#7 0x55998117dc25 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x28194c24)
#8 0x559981180d77 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x28197d76)
#9 0x559981180682 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x28197681)
#10 0x559981178c4e (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2818fc4d)
#11 0x55998117925a (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x28190259)
#12 0x559984e5b5bf (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x2be725be)
#13 0x559982a9123b (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x29aa823a)
#14 0x55997110e755 (/home/ubuntu/workspace/brave-browser-build-linux-x64-asan-nightly/src/out/Release/browser_tests+0x18125754)
[ FAILED ] All/DIPSTabHelperBrowserTest.Histograms_StorageThenClick_Incognito/1, where GetParam() = true (1335 ms)
[ FAILED ] All/DIPSTabHelperBrowserTest.Histograms_StorageThenClick_Incognito/1
```
</details>
|
non_process
|
test failure all dipstabhelperbrowsertest histograms storagethenclick incognito greetings human bad news all dipstabhelperbrowsertest histograms storagethenclick incognito stack trace all dipstabhelperbrowsertest histograms storagethenclick incognito this is chrome version not a warning failed to call method org freedesktop dbus properties get object path org freedesktop portal desktop org freedesktop dbus error invalidargs no such interface “org freedesktop portal filechooser” version getandblock failed failed to read portal version property malformed extension dictionary for extension odbfpeeihdkbihmopkbjmoonfanlbfcl key external update url has value which is not a valid url libva error vagetdrivernamebyindex failed with unknown libva error driver name null floss manager not present cannot set floss enable disable initializesandbox called with multiple threads in process gpu process extension not supported request not handled returning favicon ico base test metrics histogram tester cc failure expected equality of these values count which is histogram privacy dips timefromstoragetointeraction offtherecord does not exist stack trace home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave 
browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests home ubuntu workspace brave browser build linux asan nightly src out release browser tests all dipstabhelperbrowsertest histograms storagethenclick incognito where getparam true ms all dipstabhelperbrowsertest histograms storagethenclick incognito
| 0
|
125,380
| 16,770,497,197
|
IssuesEvent
|
2021-06-14 14:17:11
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
Notice of Disagreement - UAT
|
NOD UAT backend design frontend vsa vsa-claims-appeals
|
## Issue Description
User acceptance testing for Notice of Disagreement - Phase 1
---
## Tasks
- Conduct UAT sessions, respond to veteran questions on the call, document results, and create tickets to address any issues identified.
## Acceptance Criteria
Completed UAT sessions.
---
|
1.0
|
Notice of Disagreement - UAT - ## Issue Description
User acceptance testing for Notice of Disagreement - Phase 1
---
## Tasks
- Conduct UAT sessions, respond to veteran questions on the call, document results, and create tickets to address any issues identified.
## Acceptance Criteria
Completed UAT sessions.
---
|
non_process
|
notice of disagreement uat issue description user acceptance testing for notice of disagreement phase tasks conduct uat sessions respond to veteran questions on the call document results and create tickets to address any issues identified acceptance criteria completed uat sessions
| 0
|
4,706
| 7,546,022,660
|
IssuesEvent
|
2018-04-18 00:30:06
|
UnbFeelings/unb-feelings-docs
|
https://api.github.com/repos/UnbFeelings/unb-feelings-docs
|
closed
|
Analyze Requirements - Participants
|
Processo invalid question
|
I believe this process [activity](https://github.com/UnbFeelings/unb-feelings-docs/wiki/Processo#2212-analisar-requisitos) contains an error regarding its Participants.
Apparently the client does not take part in this activity, yet the client was allocated as one of the participants. One of the audit criteria for this activity will cover client participation; if the client is not present during its execution, that will be a non-conformity.
If this really was an error in the activity description, please answer this issue by fixing the problem so that the audit criterion is also updated.
@UnbFeelings/process
|
1.0
|
Analyze Requirements - Participants - I believe this process [activity](https://github.com/UnbFeelings/unb-feelings-docs/wiki/Processo#2212-analisar-requisitos) contains an error regarding its Participants.
Apparently the client does not take part in this activity, yet the client was allocated as one of the participants. One of the audit criteria for this activity will cover client participation; if the client is not present during its execution, that will be a non-conformity.
If this really was an error in the activity description, please answer this issue by fixing the problem so that the audit criterion is also updated.
@UnbFeelings/process
|
process
|
analyze requirements participants i believe this process activity contains an error regarding its participants apparently the client does not take part in this activity yet the client was allocated as one of the participants one of the audit criteria for this activity will cover client participation if the client is not present during its execution that will be a non conformity if this really was an error in the activity description please answer this issue by fixing the problem so that the audit criterion is also updated unbfeelings process
| 1
|
108,461
| 9,308,272,250
|
IssuesEvent
|
2019-03-25 14:14:29
|
servo/servo
|
https://api.github.com/repos/servo/servo
|
closed
|
Removing iframe with video is causing Servo to crash
|
A-content/media C-has-manual-testcase
|
HTML:
```
<!DOCTYPE html>
<html>
<body>
<iframe id="iframe" src="http://player.vimeo.com/video/60897896?autoplay=1" width="800" height="450"></iframe>
<br>
<button type="button" id="close" onclick="document.getElementById('iframe').remove();">Remove iframe</button>
</body>
</html>
```
Steps to reproduce:
- Load HTML file
- Wait till video starts playing
- click on "Remove iframe" button
Backtrace:
```
ondrej@RYZEN3:~/apps/servo$ ./servo test.html
mesa: for the -simplifycfg-sink-common option: may only occur zero or one times!
mesa: for the -global-isel-abort option: may only occur zero or one times!
called `Result::unwrap()` on an `Err` value: () (thread <unnamed>, at src/libcore/result.rs:997)
[2019-02-24T13:39:55Z ERROR servo] called `Result::unwrap()` on an `Err` value: ()
called `Result::unwrap()` on an `Err` value: RecvError (thread LayoutThread PipelineId { namespace_id: PipelineNamespaceId(1), index: PipelineIndex(1) }, at src/libcore/result.rs:997)
called `Result::unwrap()` on an `Err` value: RecvError (thread ScriptThread PipelineId { namespace_id: PipelineNamespaceId(1), index: PipelineIndex(1) }, at src/libcore/result.rs:997)
Unexpected script channel panic in constellation: RecvError (thread Constellation, at src/libcore/result.rs:997)
[2019-02-24T13:39:55Z ERROR servo] called `Result::unwrap()` on an `Err` value: RecvError
[2019-02-24T13:39:55Z ERROR servo] called `Result::unwrap()` on an `Err` value: RecvError
[2019-02-24T13:39:55Z ERROR servo] Unexpected script channel panic in constellation: RecvError
called `Result::unwrap()` on an `Err` value: Io(Custom { kind: ConnectionReset, error: StringError("All senders for this socket closed") }) (thread StorageManager, at src/libcore/result.rs:997)
[2019-02-24T13:39:55Z ERROR servo] called `Result::unwrap()` on an `Err` value: Io(Custom { kind: ConnectionReset, error: StringError("All senders for this socket closed") })
called `Result::unwrap()` on an `Err` value: Io(Os { code: 104, kind: ConnectionReset, message: "Connection reset by peer" }) (thread <unnamed>, at src/libcore/result.rs:997)
[2019-02-24T13:39:55Z ERROR servo] called `Result::unwrap()` on an `Err` value: Io(Os { code: 104, kind: ConnectionReset, message: "Connection reset by peer" })
fatal runtime error: failed to initiate panic, error 5
Redirecting call to abort() to mozalloc_abort
Stack trace
stack backtrace:
0: 0x55900ee741cd - backtrace::backtrace::trace::h764507ebeb31a659
1: 0x55900ee73022 - backtrace::capture::Backtrace::new::h861a5879c2a14db9
2: 0x55900cbbe9e7 - servo::install_crash_handler::handler::h5c1ab76b9163f044
3: 0x55900f20d78d - WasmFaultHandler
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/mozjs_sys-0.61.6/mozjs/js/src/wasm/WasmSignalHandlers.cpp:1494
4: 0x7f04f43a0dcf - <unknown>
5: 0x55900f355d4e - _Z14mozalloc_abortPKc
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/mozjs_sys-0.61.6/mozjs/memory/mozalloc/mozalloc_abort.cpp:33
6: 0x55900f355d1f - abort
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/mozjs_sys-0.61.6/mozjs/memory/mozalloc/mozalloc_abort.cpp:80
7: 0x55900fbbf3b6 - std::sys::unix::abort_internal::h47b1aab785740ad4
8: 0x55900fbb3d04 - abort
at src/libstd/sys_common/util.rs:19
9: 0x55900fbb504d - rust_panic
at src/libstd/panicking.rs:529
10: 0x55900fbb4f15 - rust_panic_with_hook
at src/libstd/panicking.rs:498
11: 0x55900fbb4961 - continue_panic_fmt
at src/libstd/panicking.rs:385
12: 0x55900fbb4845 - rust_begin_unwind
13: 0x55900fbd419c - panic_fmt
at src/libcore/panicking.rs:85
14: 0x55900dde2182 - core::result::unwrap_failed::h933bb530a367bdc7
15: 0x55900ddb2cff - servo_media_gstreamer::player::PlayerEventObserverList::notify::h7cee81ea583271f7
16: 0x55900dde35b0 - servo_media_gstreamer::player::GStreamerPlayer::setup::{{closure}}::hd2f465fdab422e0b
17: 0x55900ddf5c6e - gstreamer_app::app_sink::trampoline_new_sample::h9eda8be931efd296
18: 0x7f04f466a7a1 - <unknown>
19: 0x7f04f4aa3b1d - <unknown>
20: 0x7f04f4aa4bff - <unknown>
21: 0x7f04f49a5c39 - <unknown>
22: 0x7f04f49aded1 - gst_pad_push
23: 0x7f04f4993cca - gst_proxy_pad_chain_default
24: 0x7f04f49a5c39 - <unknown>
25: 0x7f04f49aded1 - gst_pad_push
26: 0x7f04f4aaedfc - <unknown>
27: 0x7f04f49a5c39 - <unknown>
28: 0x7f04f49aded1 - gst_pad_push
29: 0x7f04f4aaedfc - <unknown>
30: 0x7f04f49a5c39 - <unknown>
31: 0x7f04f49aded1 - gst_pad_push
32: 0x7f04f4aaedfc - <unknown>
33: 0x7f04f49a5c39 - <unknown>
34: 0x7f04f49aded1 - gst_pad_push
35: 0x7f04f4aaedfc - <unknown>
36: 0x7f04f49a5c39 - <unknown>
37: 0x7f04f49aded1 - gst_pad_push
38: 0x7f04f4993cca - gst_proxy_pad_chain_default
39: 0x7f04f49a5c39 - <unknown>
40: 0x7f04f49aded1 - gst_pad_push
41: 0x7f04a81ccb06 - <unknown>
42: 0x7f04f49daf40 - <unknown>
43: 0x7f04f4834ad2 - <unknown>
44: 0x7f04f4834134 - <unknown>
45: 0x7f04f4396163 - start_thread
46: 0x7f04f3b34dee - __clone
47: 0x0 - <unknown>
ondrej@RYZEN3:~/apps/servo$
```
|
1.0
|
Removing iframe with video is causing Servo to crash - HTML:
```
<!DOCTYPE html>
<html>
<body>
<iframe id="iframe" src="http://player.vimeo.com/video/60897896?autoplay=1" width="800" height="450"></iframe>
<br>
<button type="button" id="close" onclick="document.getElementById('iframe').remove();">Remove iframe</button>
</body>
</html>
```
Steps to reproduce:
- Load HTML file
- Wait till video starts playing
- click on "Remove iframe" button
Backtrace:
```
ondrej@RYZEN3:~/apps/servo$ ./servo test.html
mesa: for the -simplifycfg-sink-common option: may only occur zero or one times!
mesa: for the -global-isel-abort option: may only occur zero or one times!
called `Result::unwrap()` on an `Err` value: () (thread <unnamed>, at src/libcore/result.rs:997)
[2019-02-24T13:39:55Z ERROR servo] called `Result::unwrap()` on an `Err` value: ()
called `Result::unwrap()` on an `Err` value: RecvError (thread LayoutThread PipelineId { namespace_id: PipelineNamespaceId(1), index: PipelineIndex(1) }, at src/libcore/result.rs:997)
called `Result::unwrap()` on an `Err` value: RecvError (thread ScriptThread PipelineId { namespace_id: PipelineNamespaceId(1), index: PipelineIndex(1) }, at src/libcore/result.rs:997)
Unexpected script channel panic in constellation: RecvError (thread Constellation, at src/libcore/result.rs:997)
[2019-02-24T13:39:55Z ERROR servo] called `Result::unwrap()` on an `Err` value: RecvError
[2019-02-24T13:39:55Z ERROR servo] called `Result::unwrap()` on an `Err` value: RecvError
[2019-02-24T13:39:55Z ERROR servo] Unexpected script channel panic in constellation: RecvError
called `Result::unwrap()` on an `Err` value: Io(Custom { kind: ConnectionReset, error: StringError("All senders for this socket closed") }) (thread StorageManager, at src/libcore/result.rs:997)
[2019-02-24T13:39:55Z ERROR servo] called `Result::unwrap()` on an `Err` value: Io(Custom { kind: ConnectionReset, error: StringError("All senders for this socket closed") })
called `Result::unwrap()` on an `Err` value: Io(Os { code: 104, kind: ConnectionReset, message: "Connection reset by peer" }) (thread <unnamed>, at src/libcore/result.rs:997)
[2019-02-24T13:39:55Z ERROR servo] called `Result::unwrap()` on an `Err` value: Io(Os { code: 104, kind: ConnectionReset, message: "Connection reset by peer" })
fatal runtime error: failed to initiate panic, error 5
Redirecting call to abort() to mozalloc_abort
Stack trace
stack backtrace:
0: 0x55900ee741cd - backtrace::backtrace::trace::h764507ebeb31a659
1: 0x55900ee73022 - backtrace::capture::Backtrace::new::h861a5879c2a14db9
2: 0x55900cbbe9e7 - servo::install_crash_handler::handler::h5c1ab76b9163f044
3: 0x55900f20d78d - WasmFaultHandler
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/mozjs_sys-0.61.6/mozjs/js/src/wasm/WasmSignalHandlers.cpp:1494
4: 0x7f04f43a0dcf - <unknown>
5: 0x55900f355d4e - _Z14mozalloc_abortPKc
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/mozjs_sys-0.61.6/mozjs/memory/mozalloc/mozalloc_abort.cpp:33
6: 0x55900f355d1f - abort
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/mozjs_sys-0.61.6/mozjs/memory/mozalloc/mozalloc_abort.cpp:80
7: 0x55900fbbf3b6 - std::sys::unix::abort_internal::h47b1aab785740ad4
8: 0x55900fbb3d04 - abort
at src/libstd/sys_common/util.rs:19
9: 0x55900fbb504d - rust_panic
at src/libstd/panicking.rs:529
10: 0x55900fbb4f15 - rust_panic_with_hook
at src/libstd/panicking.rs:498
11: 0x55900fbb4961 - continue_panic_fmt
at src/libstd/panicking.rs:385
12: 0x55900fbb4845 - rust_begin_unwind
13: 0x55900fbd419c - panic_fmt
at src/libcore/panicking.rs:85
14: 0x55900dde2182 - core::result::unwrap_failed::h933bb530a367bdc7
15: 0x55900ddb2cff - servo_media_gstreamer::player::PlayerEventObserverList::notify::h7cee81ea583271f7
16: 0x55900dde35b0 - servo_media_gstreamer::player::GStreamerPlayer::setup::{{closure}}::hd2f465fdab422e0b
17: 0x55900ddf5c6e - gstreamer_app::app_sink::trampoline_new_sample::h9eda8be931efd296
18: 0x7f04f466a7a1 - <unknown>
19: 0x7f04f4aa3b1d - <unknown>
20: 0x7f04f4aa4bff - <unknown>
21: 0x7f04f49a5c39 - <unknown>
22: 0x7f04f49aded1 - gst_pad_push
23: 0x7f04f4993cca - gst_proxy_pad_chain_default
24: 0x7f04f49a5c39 - <unknown>
25: 0x7f04f49aded1 - gst_pad_push
26: 0x7f04f4aaedfc - <unknown>
27: 0x7f04f49a5c39 - <unknown>
28: 0x7f04f49aded1 - gst_pad_push
29: 0x7f04f4aaedfc - <unknown>
30: 0x7f04f49a5c39 - <unknown>
31: 0x7f04f49aded1 - gst_pad_push
32: 0x7f04f4aaedfc - <unknown>
33: 0x7f04f49a5c39 - <unknown>
34: 0x7f04f49aded1 - gst_pad_push
35: 0x7f04f4aaedfc - <unknown>
36: 0x7f04f49a5c39 - <unknown>
37: 0x7f04f49aded1 - gst_pad_push
38: 0x7f04f4993cca - gst_proxy_pad_chain_default
39: 0x7f04f49a5c39 - <unknown>
40: 0x7f04f49aded1 - gst_pad_push
41: 0x7f04a81ccb06 - <unknown>
42: 0x7f04f49daf40 - <unknown>
43: 0x7f04f4834ad2 - <unknown>
44: 0x7f04f4834134 - <unknown>
45: 0x7f04f4396163 - start_thread
46: 0x7f04f3b34dee - __clone
47: 0x0 - <unknown>
ondrej@RYZEN3:~/apps/servo$
```
|
non_process
|
removing iframe with video is causing servo to crash html remove iframe steps to reproduce load html file wait till video starts playing click on remove iframe button backtrace ondrej apps servo servo test html mesa for the simplifycfg sink common option may only occur zero or one times mesa for the global isel abort option may only occur zero or one times called result unwrap on an err value thread at src libcore result rs called result unwrap on an err value called result unwrap on an err value recverror thread layoutthread pipelineid namespace id pipelinenamespaceid index pipelineindex at src libcore result rs called result unwrap on an err value recverror thread scriptthread pipelineid namespace id pipelinenamespaceid index pipelineindex at src libcore result rs unexpected script channel panic in constellation recverror thread constellation at src libcore result rs called result unwrap on an err value recverror called result unwrap on an err value recverror unexpected script channel panic in constellation recverror called result unwrap on an err value io custom kind connectionreset error stringerror all senders for this socket closed thread storagemanager at src libcore result rs called result unwrap on an err value io custom kind connectionreset error stringerror all senders for this socket closed called result unwrap on an err value io os code kind connectionreset message connection reset by peer thread at src libcore result rs called result unwrap on an err value io os code kind connectionreset message connection reset by peer fatal runtime error failed to initiate panic error redirecting call to abort to mozalloc abort stack trace stack backtrace backtrace backtrace trace backtrace capture backtrace new servo install crash handler handler wasmfaulthandler at root cargo registry src github com mozjs sys mozjs js src wasm wasmsignalhandlers cpp abortpkc at root cargo registry src github com mozjs sys mozjs memory mozalloc mozalloc abort cpp abort at root 
cargo registry src github com mozjs sys mozjs memory mozalloc mozalloc abort cpp std sys unix abort internal abort at src libstd sys common util rs rust panic at src libstd panicking rs rust panic with hook at src libstd panicking rs continue panic fmt at src libstd panicking rs rust begin unwind panic fmt at src libcore panicking rs core result unwrap failed servo media gstreamer player playereventobserverlist notify servo media gstreamer player gstreamerplayer setup closure gstreamer app app sink trampoline new sample gst pad push gst proxy pad chain default gst pad push gst pad push gst pad push gst pad push gst pad push gst proxy pad chain default gst pad push start thread clone ondrej apps servo
| 0
|
486
| 2,924,992,253
|
IssuesEvent
|
2015-06-26 00:26:36
|
camsci/meteor-pi
|
https://api.github.com/repos/camsci/meteor-pi
|
closed
|
How do we store the path of a meteor?
|
domain:Image Processing domain:Modelling
|
The video processing software can now produce a list of all frames in which a meteor appears. A JSON-encoded list of [x,y,time] data can be up to 16kB. Where do we store this? As a metadata field in the database, or in a file that we associate with the event?
Do we still want to store three points extracted from this list in the database as a means of rendering quick-and-dirty Bezier curves through meteors, without having to fetch another chunk of data?
|
1.0
|
How do we store the path of a meteor? - The video processing software can now produce a list of all frames in which a meteor appears. A JSON-encoded list of [x,y,time] data can be up to 16kB. Where do we store this? As a metadata field in the database, or in a file that we associate with the event?
Do we still want to store three points extracted from this list in the database as a means of rendering quick-and-dirty Bezier curves through meteors, without having to fetch another chunk of data?
|
process
|
how do we store the path of a meteor the video processing software can now produce a list of all frames in which a meteor appears a json encoded list of data can be up to where do we store this as a metadata field in the database or in a file that we associate with the event do we still want to store three points extracted from this list in the database as a means of rendering quick and dirty bezier curves through meteors without having to fetch another chunk of data
| 1
|
204,807
| 23,282,647,209
|
IssuesEvent
|
2022-08-05 13:34:30
|
honeycombio/libhoney-java
|
https://api.github.com/repos/honeycombio/libhoney-java
|
opened
|
Vulnerabilities in latest release (1.5.0)
|
type: security
|
**Versions**
- Java: JDK 8u35
- Libhoney: 1.5.0
**Description**
I noticed Master has more recent library upgrades, but the latest maven repository release has dependencies with known vulnerabilities:
jackson.core -> jackson-databind 0:2.13.1:
https://nvd.nist.gov/vuln/detail/CVE-2020-36518
https://github.com/advisories/GHSA-57j2-w4cx-62h2
https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-2421244
commons-codec:commons-codec:0:1.11
https://snyk.io/vuln/SNYK-JAVA-COMMONSCODEC-561518
I was wondering if that would be a reasonable prompt for a new release.
|
True
|
Vulnerabilities in latest release (1.5.0) - **Versions**
- Java: JDK 8u35
- Libhoney: 1.5.0
**Description**
I noticed Master has more recent library upgrades, but the latest maven repository release has dependencies with known vulnerabilities:
jackson.core -> jackson-databind 0:2.13.1:
https://nvd.nist.gov/vuln/detail/CVE-2020-36518
https://github.com/advisories/GHSA-57j2-w4cx-62h2
https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-2421244
commons-codec:commons-codec:0:1.11
https://snyk.io/vuln/SNYK-JAVA-COMMONSCODEC-561518
I was wondering if that would be a reasonable prompt for a new release.
|
non_process
|
vulnerabilities in latest release versions java jdk libhoney description i noticed master has more recent library upgrades but the latest maven repository release has dependencies with known vulnerabilities jackson core jackson databind commons codec commons codec i was wondering if that would be a reasonable prompt for a new release
| 0
|
106,287
| 9,126,110,048
|
IssuesEvent
|
2019-02-24 19:03:27
|
svigerske/Bt
|
https://api.github.com/repos/svigerske/Bt
|
closed
|
Errors on page https://projects.coin-or.org/CoinHelp
|
bug configuration tests major
|
Issue created by migration from Trac.
Original creator: jpfasano
Original creation time: 2007-08-07 17:19:38
Assignee: andreasw
Version: 0.5
https://projects.coin-or.org/CoinHelp displays error messages:
```
Error: Failed to load processor TOC
No macro or processor named 'TOC' found
```
|
1.0
|
Errors on page https://projects.coin-or.org/CoinHelp - Issue created by migration from Trac.
Original creator: jpfasano
Original creation time: 2007-08-07 17:19:38
Assignee: andreasw
Version: 0.5
https://projects.coin-or.org/CoinHelp displays error messages:
```
Error: Failed to load processor TOC
No macro or processor named 'TOC' found
```
|
non_process
|
errors on page issue created by migration from trac original creator jpfasano original creation time assignee andreasw version displays error messages error failed to load processor toc no macro or processor named toc found
| 0
|
17,989
| 24,010,011,502
|
IssuesEvent
|
2022-09-14 17:55:25
|
GoogleContainerTools/jib-extensions
|
https://api.github.com/repos/GoogleContainerTools/jib-extensions
|
closed
|
jib-native-image-extension-gradle not released
|
kind/user-question priority/p2 process
|
Hi,
Please release the submodule `jib-native-image-extension-gradle` as it is not yet released.
Thanks,
Ashesh Saraf
|
1.0
|
jib-native-image-extension-gradle not released - Hi,
Please release the submodule `jib-native-image-extension-gradle` as it is not yet released.
Thanks,
Ashesh Saraf
|
process
|
jib native image extension gradle not released hi please release the submodule jib native image extension gradle as it is not yet released thanks ashesh saraf
| 1
|
20,922
| 6,122,548,341
|
IssuesEvent
|
2017-06-23 00:15:08
|
ganeti/ganeti
|
https://api.github.com/repos/ganeti/ganeti
|
closed
|
IPolicy break instance creation with disk adoption
|
imported_from_google_code Status:Released Type-Defect
|
Originally reported of Google Code with ID 255.
```
What software version are you running? Please provide the output of "gnt-
cluster --version" and "gnt-cluster version".
gnt-cluster (ganeti v2.6.0) 2.6.0
<b>What distribution are you using?</b>
Debian (wheezy/sid)
<b>What steps will reproduce the problem?</b>
1.Attempt to create a new instance (gnt-instance add -o image+default -t plain -n vmhost-0 --disk 0:adopt=imagetest,vg=ganeti -B memory=1GB --no-name-check --no-ip-check --no-start --no-install imagetest)
<b>What is the expected output? What do you see instead?</b>
Expected: instance created without issue as the disk being adopted is within the range of accepted disk sizes specified via IPolicy.
Actual:
Failure: prerequisites not met for this operation:
error type: wrong_input, error details:
Instance allocation to group 6e1b2abd-69c2-4b31-b2df-f96aa5122d76 violates policy: disk-size/0 value 0 is not in range [1024, 1048576]
<b>Please provide any additional information below.</b>
IPolicy:
Instance policy - limits for instances:
- std
cpu-count: 1
disk-count: 1
disk-size: 1024
memory-size: 128
nic-count: 1
spindle-use: 1
- max
cpu-count: 8
disk-count: 16
disk-size: 1048576
memory-size: 32768
nic-count: 8
spindle-use: 12
- min
cpu-count: 1
disk-count: 1
disk-size: 1024
memory-size: 128
nic-count: 1
spindle-use: 1
- enabled disk templates: sharedfile, diskless, plain, blockdev, drbd, file, rbd
- vcpu-ratio: 4.0
- spindle-ratio: 32.0
VG Info:
--- Logical volume ---
LV Name /dev/ganeti/imagetest
VG Name ganeti
LV UUID Rg2ie1-UgJ1-BK4N-Aae4-0Vci-gIBq-kldkSI
LV Write Access read/write
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:22
```
Originally added on 2012-08-06 18:35:02 +0000 UTC.
|
1.0
|
IPolicy break instance creation with disk adoption - Originally reported of Google Code with ID 255.
```
What software version are you running? Please provide the output of "gnt-
cluster --version" and "gnt-cluster version".
gnt-cluster (ganeti v2.6.0) 2.6.0
<b>What distribution are you using?</b>
Debian (wheezy/sid)
<b>What steps will reproduce the problem?</b>
1.Attempt to create a new instance (gnt-instance add -o image+default -t plain -n vmhost-0 --disk 0:adopt=imagetest,vg=ganeti -B memory=1GB --no-name-check --no-ip-check --no-start --no-install imagetest)
<b>What is the expected output? What do you see instead?</b>
Expected: instance created without issue as the disk being adopted is within the range of accepted disk sizes specified via IPolicy.
Actual:
Failure: prerequisites not met for this operation:
error type: wrong_input, error details:
Instance allocation to group 6e1b2abd-69c2-4b31-b2df-f96aa5122d76 violates policy: disk-size/0 value 0 is not in range [1024, 1048576]
<b>Please provide any additional information below.</b>
IPolicy:
Instance policy - limits for instances:
- std
cpu-count: 1
disk-count: 1
disk-size: 1024
memory-size: 128
nic-count: 1
spindle-use: 1
- max
cpu-count: 8
disk-count: 16
disk-size: 1048576
memory-size: 32768
nic-count: 8
spindle-use: 12
- min
cpu-count: 1
disk-count: 1
disk-size: 1024
memory-size: 128
nic-count: 1
spindle-use: 1
- enabled disk templates: sharedfile, diskless, plain, blockdev, drbd, file, rbd
- vcpu-ratio: 4.0
- spindle-ratio: 32.0
VG Info:
--- Logical volume ---
LV Name /dev/ganeti/imagetest
VG Name ganeti
LV UUID Rg2ie1-UgJ1-BK4N-Aae4-0Vci-gIBq-kldkSI
LV Write Access read/write
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:22
```
Originally added on 2012-08-06 18:35:02 +0000 UTC.
|
non_process
|
ipolicy break instance creation with disk adoption originally reported of google code with id what software version are you running please provide the output of gnt cluster version and gnt cluster version gnt cluster ganeti what distribution are you using debian wheezy sid what steps will reproduce the problem attempt to create a new instance gnt instance add o image default t plain n vmhost disk adopt imagetest vg ganeti b memory no name check no ip check no start no install imagetest what is the expected output what do you see instead expected instance created without issue as the disk being adopted is within the range of accepted disk sizes specified via ipolicy actual failure prerequisites not met for this operation error type wrong input error details instance allocation to group violates policy disk size value is not in range please provide any additional information below ipolicy instance policy limits for instances std cpu count disk count disk size memory size nic count spindle use max cpu count disk count disk size memory size nic count spindle use min cpu count disk count disk size memory size nic count spindle use enabled disk templates sharedfile diskless plain blockdev drbd file rbd vcpu ratio spindle ratio vg info logical volume lv name dev ganeti imagetest vg name ganeti lv uuid gibq kldksi lv write access read write lv status available open lv size gib current le segments allocation inherit read ahead sectors auto currently set to block device originally added on utc
| 0
|
5,768
| 8,609,848,968
|
IssuesEvent
|
2018-11-19 01:14:46
|
carloseduardov8/Viajato
|
https://api.github.com/repos/carloseduardov8/Viajato
|
closed
|
Implement hotel booking
|
Priority:Very High Process:Implement Requirement
|
Allow the user to book a hotel room, taking the hotel and room type into account
|
1.0
|
Implement hotel booking - Allow the user to book a hotel room, taking the hotel and room type into account
|
process
|
implement hotel booking allow the user to book a hotel room taking the hotel and room type into account
| 1
|
104,274
| 22,620,015,102
|
IssuesEvent
|
2022-06-30 04:58:18
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
closed
|
[Bug]: Blinking text cursor is seen inside an already selected dropdown in Welcome tutorial
|
Bug Low Production Needs Triaging BE Coders Pod New Developers Pod Welcome Screen
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
Blinking text cursor is seen inside an already selected dropdown in step 1 of Welcome tutorial. Also no hover highlight is observed for the remaining drop down option.
https://loom.com/share/00ff01b0fc7a4977be7fc737cd7406a2

### Steps To Reproduce
1. Start the Welcome tutorial
2. In step 1 > click on Mock Database drop down and observe the blinking cursor
### Public Sample App
_No response_
### Version
Cloud
|
1.0
|
[Bug]: Blinking text cursor is seen inside an already selected dropdown in Welcome tutorial - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
Blinking text cursor is seen inside an already selected dropdown in step 1 of Welcome tutorial. Also no hover highlight is observed for the remaining drop down option.
https://loom.com/share/00ff01b0fc7a4977be7fc737cd7406a2

### Steps To Reproduce
1. Start the Welcome tutorial
2. In step 1 > click on Mock Database drop down and observe the blinking cursor
### Public Sample App
_No response_
### Version
Cloud
|
non_process
|
blinking text cursor is seen inside an already selected dropdown in welcome tutorial is there an existing issue for this i have searched the existing issues description blinking text cursor is seen inside an already selected dropdown in step of welcome tutorial also no hover highlight is observed for the remaining drop down option steps to reproduce start the welcome tutorial in step click on mock database drop down and observe the blinking cursor public sample app no response version cloud
| 0
|
251,943
| 27,218,206,126
|
IssuesEvent
|
2023-02-21 01:15:06
|
rsoreq/WebGoat
|
https://api.github.com/repos/rsoreq/WebGoat
|
closed
|
CVE-2021-39150 (High) detected in xstream-1.4.5.jar - autoclosed
|
security vulnerability
|
## CVE-2021-39150 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.5.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Path to dependency file: /webgoat-server/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.4.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rsoreq/WebGoat/commit/31ee732ed75f4ea201663917b565c1042cec5c7d">31ee732ed75f4ea201663917b565c1042cec5c7d</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream with a Java runtime version 14 to 8. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the [Security Framework](https://x-stream.github.io/security.html#framework), you will have to use at least version 1.4.18.
<p>Publish Date: 2021-08-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-39150>CVE-2021-39150</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-hph2-m3g5-xxv4">https://github.com/x-stream/xstream/security/advisories/GHSA-hph2-m3g5-xxv4</a></p>
<p>Release Date: 2021-08-23</p>
<p>Fix Resolution: 1.4.18</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
True
|
CVE-2021-39150 (High) detected in xstream-1.4.5.jar - autoclosed - ## CVE-2021-39150 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.5.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Path to dependency file: /webgoat-server/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **xstream-1.4.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rsoreq/WebGoat/commit/31ee732ed75f4ea201663917b565c1042cec5c7d">31ee732ed75f4ea201663917b565c1042cec5c7d</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream with a Java runtime version 14 to 8. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the [Security Framework](https://x-stream.github.io/security.html#framework), you will have to use at least version 1.4.18.
<p>Publish Date: 2021-08-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-39150>CVE-2021-39150</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-hph2-m3g5-xxv4">https://github.com/x-stream/xstream/security/advisories/GHSA-hph2-m3g5-xxv4</a></p>
<p>Release Date: 2021-08-23</p>
<p>Fix Resolution: 1.4.18</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_process
|
cve high detected in xstream jar autoclosed cve high severity vulnerability vulnerable library xstream jar xstream is a serialization library from java objects to xml and back path to dependency file webgoat server pom xml path to vulnerable library home wss scanner repository com thoughtworks xstream xstream xstream jar home wss scanner repository com thoughtworks xstream xstream xstream jar repository com thoughtworks xstream xstream xstream jar dependency hierarchy x xstream jar vulnerable library found in head commit a href found in base branch develop vulnerability details xstream is a simple library to serialize objects to xml and back again in affected versions this vulnerability may allow a remote attacker to request data from internal resources that are not publicly available only by manipulating the processed input stream with a java runtime version to no user is affected who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types if you rely on xstream s default blacklist of the you will have to use at least version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue
| 0
|
626,084
| 19,784,750,550
|
IssuesEvent
|
2022-01-18 04:30:52
|
lokka30/LevelledMobs
|
https://api.github.com/repos/lokka30/LevelledMobs
|
closed
|
Hardcore Featureset
|
type: improvement priority: normal status: confirmed
|
```yaml
apply-settings:
hardcore:
deathboost: true
secondchance:
count: 1
health_remaining: 10
health_restored: 20
rollattributes:
challenge-rating: 1.0
max-health: true
movement-speed: true
attack-damage: true
mobraid: true
endermanwarp: true
```
This system could be expanded upon with more features in the future. The reason I suggest separating them is because I believe these settings give entities an extra advantage beyond what we normally would apply otherwise.
The first suggestion regarding `deathboost:` ; when this is enabled, if an entity kills a player, that entity will be levelled up by one level and fully healed up (can be 'relevelled', under current arrangements, the health and attributes would correct themselves). This can be represented as a `true` or `false`, or can be numerical chance `0.0` through `1.0` to apply.
---
The second suggestion regarding `secondchance:` ; when this is enabled, LM would detect when an entity is at `health_remaining:` percent health, and then restore `health_restored:` percent of health to the entity, a maximum of `count:` times.
[https://hub.spigotmc.org/javadocs/bukkit/org/bukkit/event/entity/EntityResurrectEvent.html](https://hub.spigotmc.org/javadocs/bukkit/org/bukkit/event/entity/EntityResurrectEvent.html)
---
The third suggestion regarding `rollattributes:` ; when this is enabled, LM would select a value between the default and the configured result, and replace the attribute with this new value; then levelling can occur. This applies a randomness to all listed attributes to provide a challenge rating.
For example, if an entity would spawn with 40hp, and the `challenge-rating: 1` , then `(40 + (40 x 1)) = 80` ; LM then selects a value between `40-80` and replaces the default before levelling.
---
The fourth suggestion regarding `mobraid:` ; when an entity is attacked by a player, any other entity within 16 blocks will also know where the player is located and head for that direction.
---
The fifth suggestion regarding `endermanwarp:` ; when an entity is hit by a player, it is teleported in the same fashion that an Enderman would teleport to protect itself. Got the idea from this conversation (includes some beginning code steps):
[https://www.spigotmc.org/threads/entity-teleport-like-enderman-help.510001/](https://www.spigotmc.org/threads/entity-teleport-like-enderman-help.510001/)
|
1.0
|
Hardcore Featureset - ```yaml
apply-settings:
hardcore:
deathboost: true
secondchance:
count: 1
health_remaining: 10
health_restored: 20
rollattributes:
challenge-rating: 1.0
max-health: true
movement-speed: true
attack-damage: true
mobraid: true
endermanwarp: true
```
This system could be expanded upon with more features in the future. The reason I suggest separating them is because I believe these settings give entities an extra advantage beyond what we normally would apply otherwise.
The first suggestion regarding `deathboost:` ; when this is enabled, if an entity kills a player, that entity will be levelled up by one level and fully healed up (can be 'relevelled', under current arrangements, the health and attributes would correct themselves). This can be represented as a `true` or `false`, or can be numerical chance `0.0` through `1.0` to apply.
---
The second suggestion regarding `secondchance:` ; when this is enabled, LM would detect when an entity is at `health_remaining:` percent health, and then restore `health_restored:` percent of health to the entity, a maximum of `count:` times.
[https://hub.spigotmc.org/javadocs/bukkit/org/bukkit/event/entity/EntityResurrectEvent.html](https://hub.spigotmc.org/javadocs/bukkit/org/bukkit/event/entity/EntityResurrectEvent.html)
---
The third suggestion regarding `rollattributes:` ; when this is enabled, LM would select a value between the default and the configured result, and replace the attribute with this new value; then levelling can occur. This applies a randomness to all listed attributes to provide a challenge rating.
For example, if an entity would spawn with 40hp, and the `challenge-rating: 1` , then `(40 + (40 x 1)) = 80` ; LM then selects a value between `40-80` and replaces the default before levelling.
---
The fourth suggestion regarding `mobraid:` ; when an entity is attacked by a player, any other entity within 16 blocks will also know where the player is located and head for that direction.
---
The fifth suggestion regarding `endermanwarp:` ; when an entity is hit by a player, it is teleported in the same fashion that an Enderman would teleport to protect itself. Got the idea from this conversation (includes some beginning code steps):
[https://www.spigotmc.org/threads/entity-teleport-like-enderman-help.510001/](https://www.spigotmc.org/threads/entity-teleport-like-enderman-help.510001/)
|
non_process
|
hardcore featureset yaml apply settings hardcore deathboost true secondchance count health remaining health restored rollattributes challenge rating max health true movement speed true attack damage true mobraid true endermanwarp true this system could be expanded upon with more features in the future the reason i suggest separating them is because i believe these settings give entities an extra advantage beyond what we normally would apply otherwise the first suggestion regarding deathboost when this is enabled if an entity kills a player that entity will be levelled up by one level and fully healed up can be relevelled under current arrangements the health and attributes would correct themselves this can be represented as a true or false or can be numerical chance through to apply the second suggestion regarding secondchance when this is enabled lm would detect when an entity is at health remaining percent health and then restore health restored percent of health to the entity a maximum of count times the third suggestion regarding rollattributes when this is enabled lm would select a value between the default and the configured result and replace the attribute with this new value then levelling can occur this applies a randomness to all listed attributes to provide a challenge rating for example if an entity would spawn with and the challenge rating then x lm then selects a value between and replaces the default before levelling the fourth suggestion regarding mobraid when an entity is attacked by a player any other entity within blocks will also know where the player is located and head for that direction the fifth suggestion regarding endermanwarp when an entity is hit by a player it is teleported in the same fashion that an enderman would teleport to protect itself got the idea from this conversation includes some beginning code steps
| 0
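The `rollattributes:` arithmetic in the record above can be sketched in Python. This is a hypothetical helper for illustration only, not code from LevelledMobs; the function names are invented:

```python
import random

def attribute_roll_range(default: float, challenge_rating: float) -> tuple:
    """Bounds for a rolled attribute: default .. default + default * rating.

    Matches the record's example: a 40hp spawn with challenge-rating 1.0
    gives (40 + (40 * 1)) = 80, so the roll range is 40-80.
    """
    return default, default + default * challenge_rating

def roll_attribute(default: float, challenge_rating: float) -> float:
    """Pick a replacement value uniformly from the roll range."""
    low, high = attribute_roll_range(default, challenge_rating)
    return random.uniform(low, high)
```

The rolled value would then replace the default attribute before the normal levelling pass runs.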
|
304,543
| 9,333,421,473
|
IssuesEvent
|
2019-03-28 14:23:31
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
addons.mozilla.org - see bug description
|
browser-firefox-mobile browser-focus-geckoview priority-important
|
<!-- @browser: Firefox Mobile 65.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:65.0) Gecko/65.0 Firefox/65.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://addons.mozilla.org/en-US/firefox/addon/soundfixer/?src=featured
**Browser / Version**: Firefox Mobile 65.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: app VPN TUNNELBEAR ...confirms & verifies who l am, says "welcome to tunnelbear" etc.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
addons.mozilla.org - see bug description - <!-- @browser: Firefox Mobile 65.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:65.0) Gecko/65.0 Firefox/65.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://addons.mozilla.org/en-US/firefox/addon/soundfixer/?src=featured
**Browser / Version**: Firefox Mobile 65.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: app VPN TUNNELBEAR ...confirms & verifies who l am, says "welcome to tunnelbear" etc.
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
addons mozilla org see bug description url browser version firefox mobile operating system android tested another browser unknown problem type something else description app vpn tunnelbear confirms verifies who l am says welcome to tunnelbear etc steps to reproduce browser configuration none from with ❤️
| 0
|
6,516
| 9,604,539,222
|
IssuesEvent
|
2019-05-10 20:19:36
|
robertwpaul/lintol-challenge-1
|
https://api.github.com/repos/robertwpaul/lintol-challenge-1
|
closed
|
Add Credit Card Processor
|
good first issue processor
|
Create a suitable regex, library, or implementation to detect credit cards within the dataset.
Example PR: https://github.com/robertwpaul/lintol-challenge-1/pull/33
**Acceptance Criteria**
* Adds a processor
* Adds unit tests
* Adds a processor specific doc/
* Links processor to README.md
|
1.0
|
Add Credit Card Processor - Create a suitable regex, library, or implementation to detect credit cards within the dataset.
Example PR: https://github.com/robertwpaul/lintol-challenge-1/pull/33
**Acceptance Criteria**
* Adds a processor
* Adds unit tests
* Adds a processor specific doc/
* Links processor to README.md
|
process
|
add credit card processor create a suitable regex library or implementation to detect credit cards within the dataset example pr acceptance criteria adds a processor adds unit tests adds a processor specific doc links processor to readme md
| 1
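One possible shape for such a processor, sketched in Python. This is an assumption about what a "suitable regex, library, or implementation" could look like, not the code from the linked example PR; the pattern and the Luhn filter are illustrative choices:

```python
import re

# Loose pattern: 13-16 digits, optionally separated by single spaces or dashes.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def luhn_valid(candidate: str) -> bool:
    """Luhn checksum, used to filter out most false positives."""
    digits = [int(d) for d in re.sub(r"[ -]", "", candidate)]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_cards(text: str) -> list:
    """Return candidate card numbers in `text` that pass the Luhn check."""
    return [m.group(0) for m in CARD_RE.finditer(text) if luhn_valid(m.group(0))]
```

For example, `find_cards("call 4111-1111-1111-1111 now")` flags the well-known Visa test number, while a string that fails the checksum is ignored.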
|
201,671
| 15,806,843,719
|
IssuesEvent
|
2021-04-04 07:33:44
|
AY2021S2-CS2103-W16-1/tp
|
https://api.github.com/repos/AY2021S2-CS2103-W16-1/tp
|
closed
|
[PE-D] User guide feature list is messy
|
documentation
|
There are many features in your application and while they may be split between tasks and events, it is hard to navigate and look for the function i want. Perhaps you can split it into more levels so that it is easy to navigate.

<!--session: 1617431428583-5cacf3ce-be79-46b8-9a88-163f327fd415-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: RuiXiong2211/ped#10
|
1.0
|
[PE-D] User guide feature list is messy - There are many features in your application and while they may be split between tasks and events, it is hard to navigate and look for the function i want. Perhaps you can split it into more levels so that it is easy to navigate.

<!--session: 1617431428583-5cacf3ce-be79-46b8-9a88-163f327fd415-->
-------------
Labels: `severity.Low` `type.DocumentationBug`
original: RuiXiong2211/ped#10
|
non_process
|
user guide feature list is messy there are many features in your application and while they may be split between tasks and events it is hard to navigate and look for the function i want perhaps you can split it into more levels so that it is easy to navigate labels severity low type documentationbug original ped
| 0
|
435
| 2,699,743,806
|
IssuesEvent
|
2015-04-03 19:28:28
|
dart-lang/dartdoc
|
https://api.github.com/repos/dart-lang/dartdoc
|
closed
|
Instructions for generating SDK docs
|
Infrastructure P1
|
Hi @keertip can you update the README with instructions for how to generate the SDK docs? I'd like to start testing. Thanks!
|
1.0
|
Instructions for generating SDK docs - Hi @keertip can you update the README with instructions for how to generate the SDK docs? I'd like to start testing. Thanks!
|
non_process
|
instructions for generating sdk docs hi keertip can you update the readme with instructions for how to generate the sdk docs i d like to start testing thanks
| 0
|
11,004
| 13,792,963,022
|
IssuesEvent
|
2020-10-09 14:19:10
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Memory leak for restore and input pipe -- FIXED
|
bug log-processing
|
I tried hard to find this bug for 2 weeks. So simple but so difficult to find where.
I used several tools for that, like ` valgrind`and `CLang` with `AddressSanitizer`.
I managed to reduce my test to a `LOG` with 4 lines and found the leak.
The error occurs in the 4th record.
How to reproduce the problem:
`mkdir db`
`echo -n '' | goaccess --persist --db-path=./db --output=./a.html -`
`cat logfile | goaccess`**`--restore`**`--persist --db-path=./db --output=./a.html`**`-`**
But, In another way, **no error** occurs:
`mkdir db`
`echo -n '' | goaccess --persist --db-path=./db --output=./a.html -`
`goaccess`**`--restore`**`--persist --db-path=./db --output=./a.html`**`logfile`**
or
`mkdir db`
`echo -n '' | goaccess --persist --db-path=./db --output=./a.html -`
`cat logfile | goaccess --persist --db-path=./db --output=./a.html`**`-`**
So I found the code below, at `parse.c` around `2530`:
` ` `/* it's a pipe, then use the last parsed timestamp */`
` ` `ts = mktime (&logitem->dt);`
` ` `if (conf.restore && !glog->inode && last > 0 && last >= ts)`
` ` ` ` ` ` **`return 0;`**
Oops! It should probably be:
` ` `/* it's a pipe, then use the last parsed timestamp */`
` ` `ts = mktime (&logitem->dt);`
` ` `if (conf.restore && !glog->inode && last > 0 && last >= ts)`**`{`**
` ` ` ` ` ` **`ret = 0;`**
` ` ` ` ` ` **`goto cleanup;`**
` ` **`}`**
I thought and rethought several times ... so I removed the code, and it became:
` ` `ts = mktime (&logitem->dt);`
I believe that there should be no differences between `PIPE` and normal `LOG` file.
It is very common for me to apply a filter with `awk` and use the `PIPE`, even for old `LOGs` [ offline ].
I think that you should have used this conditional as an **optimization**, for **real-time** processing.
**PS**: In my `LOG` of 1.8 million records I had a leak of more than **2 Gigabytes** of memory.
All Right. See you soon.
|
1.0
|
Memory leak for restore and input pipe -- FIXED - I tried hard to find this bug for 2 weeks. So simple but so difficult to find where.
I used several tools for that, like ` valgrind`and `CLang` with `AddressSanitizer`.
I managed to reduce my test to a `LOG` with 4 lines and found the leak.
The error occurs in the 4th record.
How to reproduce the problem:
`mkdir db`
`echo -n '' | goaccess --persist --db-path=./db --output=./a.html -`
`cat logfile | goaccess`**`--restore`**`--persist --db-path=./db --output=./a.html`**`-`**
But, In another way, **no error** occurs:
`mkdir db`
`echo -n '' | goaccess --persist --db-path=./db --output=./a.html -`
`goaccess`**`--restore`**`--persist --db-path=./db --output=./a.html`**`logfile`**
or
`mkdir db`
`echo -n '' | goaccess --persist --db-path=./db --output=./a.html -`
`cat logfile | goaccess --persist --db-path=./db --output=./a.html`**`-`**
So I found the code below, at `parse.c` around `2530`:
` ` `/* it's a pipe, then use the last parsed timestamp */`
` ` `ts = mktime (&logitem->dt);`
` ` `if (conf.restore && !glog->inode && last > 0 && last >= ts)`
` ` ` ` ` ` **`return 0;`**
Oops! It should probably be:
` ` `/* it's a pipe, then use the last parsed timestamp */`
` ` `ts = mktime (&logitem->dt);`
` ` `if (conf.restore && !glog->inode && last > 0 && last >= ts)`**`{`**
` ` ` ` ` ` **`ret = 0;`**
` ` ` ` ` ` **`goto cleanup;`**
` ` **`}`**
I thought and rethought several times ... so I removed the code, and it became:
` ` `ts = mktime (&logitem->dt);`
I believe that there should be no differences between `PIPE` and normal `LOG` file.
It is very common for me to apply a filter with `awk` and use the `PIPE`, even for old `LOGs` [ offline ].
I think that you should have used this conditional as an **optimization**, for **real-time** processing.
**PS**: In my `LOG` of 1.8 million records I had a leak of more than **2 Gigabytes** of memory.
All Right. See you soon.
|
process
|
memory leak for restore and input pipe fixed i tried hard to find this bug for weeks so simple but so difficult to find where i used several tools for that like valgrind and clang with addresssanitizer i managed to reduce my test to an log with lines and found the leak in record than occurs the error how to reproduce the problem mkdir db echo n goaccess persist db path db output a html cat logfile goaccess restore persist db path db output a html but in another way no error occurs mkdir db echo n goaccess persist db path db output a html goaccess restore persist db path db output a html logfile or mkdir db echo n goaccess persist db path db output a html cat logfile goaccess persist db path db output a html so i found the code below at parse c around it s a pipe then use the last parsed timestamp ts mktime logitem dt if conf restore glog inode last last ts return oops maybe should must be it s a pipe then use the last parsed timestamp ts mktime logitem dt if conf restore glog inode last last ts ret goto cleanup i thought and rethought several times so i removed the code and it became ts mktime logitem dt i believe that there should be no differences between pipe and normal log file it is very common for me to apply a filter with awk and use the pipe even for old logs i think that you should have used this conditional as an optimization for real time processing ps in my log of million records i had a leak of more than gigabytes of memory all right see you soon
| 1
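The restore-skip condition quoted from `parse.c` in the record above can be restated as a small Python sketch. This is an analogue for illustration only; the actual code is C, where the point of the fix is to `goto cleanup` instead of returning early so that per-line allocations are still freed:

```python
def should_skip(ts: int, last_restored_ts: int, from_pipe: bool, restoring: bool) -> bool:
    """True when a log line read from a pipe during --restore carries a
    timestamp no newer than the last persisted one and should be ignored.
    A pipe has no inode to compare, so only the timestamp is available."""
    return restoring and from_pipe and last_restored_ts > 0 and last_restored_ts >= ts
```

Whether the line is skipped or not, the cleanup path must run; skipping it only on the early-return branch is exactly the leak the record describes.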
|
19,712
| 26,053,759,394
|
IssuesEvent
|
2022-12-22 21:50:50
|
MPMG-DCC-UFMG/C01
|
https://api.github.com/repos/MPMG-DCC-UFMG/C01
|
opened
|
Steps interface with Vue.js - Standardize behaviors with the old version
|
[2] Baixa Prioridade [0] Desenvolvimento [1] Aprimoramento [3] Processamento Dinâmico
|
## Expected Behavior
The original idea of the Vue interface refactoring was to replicate the behavior of the old version.
## Current Behavior
Apart from the differences documented in the other issues about the Vue.js steps interface, four main points of discrepancy with the previous interface remain:
- When there is an "inner step" (an iterable inside an iteration, for example), the old interface shows an arrow that allows hiding and showing that step's parameters. The new version does not implement this behavior.
- In the old interface, when the mouse hovers over a step, it is highlighted with a blue border and the step's manipulation buttons (move, duplicate and remove) are shown. In the new version the highlight does not occur and the manipulation buttons are always shown.
- In the old interface, when the mouse hovers over an optional parameter, that parameter's removal button is shown. In the new version this button is always shown.
- In the old interface, duplicating a step with "children" (indented steps) duplicates all the children in addition to the step itself. This behavior does not occur in the new version. However, an inconsistency is observed: deleting a step with children in the old version does not delete the children. I believe that if we include the children in duplication, it would make sense to include them in removal as well.
It is important to stress that the items described above only explain how the previous interface works. If it makes sense, we can skip implementing these changes.
## System
Branch `issue-882`.
|
1.0
|
Steps interface with Vue.js - Standardize behaviors with the old version - ## Expected Behavior
The original idea of the Vue interface refactoring was to replicate the behavior of the old version.
## Current Behavior
Apart from the differences documented in the other issues about the Vue.js steps interface, four main points of discrepancy with the previous interface remain:
- When there is an "inner step" (an iterable inside an iteration, for example), the old interface shows an arrow that allows hiding and showing that step's parameters. The new version does not implement this behavior.
- In the old interface, when the mouse hovers over a step, it is highlighted with a blue border and the step's manipulation buttons (move, duplicate and remove) are shown. In the new version the highlight does not occur and the manipulation buttons are always shown.
- In the old interface, when the mouse hovers over an optional parameter, that parameter's removal button is shown. In the new version this button is always shown.
- In the old interface, duplicating a step with "children" (indented steps) duplicates all the children in addition to the step itself. This behavior does not occur in the new version. However, an inconsistency is observed: deleting a step with children in the old version does not delete the children. I believe that if we include the children in duplication, it would make sense to include them in removal as well.
It is important to stress that the items described above only explain how the previous interface works. If it makes sense, we can skip implementing these changes.
## System
Branch `issue-882`.
|
process
|
steps interface with vue js standardize behaviors with the old version expected behavior the original idea of the vue interface refactoring was to replicate the behavior of the old version current behavior apart from the differences documented in the other issues about the vue js steps interface four main points of discrepancy with the previous interface remain when there is an inner step an iterable inside an iteration for example the old interface shows an arrow that allows hiding and showing that step s parameters the new version does not implement this behavior in the old interface when the mouse hovers over a step it is highlighted with a blue border and the step s manipulation buttons move duplicate and remove are shown in the new version the highlight does not occur and the manipulation buttons are always shown in the old interface when the mouse hovers over an optional parameter that parameter s removal button is shown in the new version this button is always shown in the old interface duplicating a step with children indented steps duplicates all the children in addition to the step itself this behavior does not occur in the new version however an inconsistency is observed deleting a step with children in the old version does not delete the children i believe that if we include the children in duplication it would make sense to include them in removal as well it is important to stress that the items described above only explain how the previous interface works if it makes sense we can skip implementing these changes system branch issue
| 1
|
16,303
| 20,960,721,682
|
IssuesEvent
|
2022-03-27 19:05:26
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add "Jacqueline"
|
suggested title in process
|
Title: Jacqueline
Type (film/tv show): film
Film or show in which it appears: Darling (1965), https://www.imdb.com/title/tt0059084/
Is the parent film/show streaming anywhere? The Criterion Channel, https://www.criterionchannel.com
About when in the parent film/show does it appear? ~37 min
Actual footage of the film/show can be seen (yes/no)? yes

|
1.0
|
Add "Jacqueline" - Title: Jacqueline
Type (film/tv show): film
Film or show in which it appears: Darling (1965), https://www.imdb.com/title/tt0059084/
Is the parent film/show streaming anywhere? The Criterion Channel, https://www.criterionchannel.com
About when in the parent film/show does it appear? ~37 min
Actual footage of the film/show can be seen (yes/no)? yes

|
process
|
add jacqueline title jacqueline type film tv show film film or show in which it appears darling is the parent film show streaming anywhere the criterion channel about when in the parent film show does it appear min actual footage of the film show can be seen yes no yes
| 1
|
23,361
| 16,091,576,004
|
IssuesEvent
|
2021-04-26 17:22:25
|
18F/aws-admin
|
https://api.github.com/repos/18F/aws-admin
|
closed
|
Enforce 2FA/MFA for AWS IAM jump account users
|
g: accepted i: infrastructure t: weeks
|
## Background information
Broken out from https://github.com/18F/aws-admin/issues/40
## Implementation steps
- [x] Add condition to all IAM policies to require MFA
## Acceptance criteria
- [ ] IAM users are unable to do anything besides set up MFA if they don't have MFA set up
- [ ] Documented process for handling users who lose their MFA
|
1.0
|
Enforce 2FA/MFA for AWS IAM jump account users - ## Background information
Broken out from https://github.com/18F/aws-admin/issues/40
## Implementation steps
- [x] Add condition to all IAM policies to require MFA
## Acceptance criteria
- [ ] IAM users are unable to do anything besides set up MFA if they don't have MFA set up
- [ ] Documented process for handling users who lose their MFA
|
non_process
|
enforce mfa for aws iam jump account users background information broken out from implementation steps add condition to all iam policies to require mfa acceptance criteria iam users are unable to do anything besides set up mfa if they don t have mfa set up documented process for handling users who lose their mfa
| 0
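The "add condition to all IAM policies to require MFA" step in the record above typically uses the standard AWS global condition key `aws:MultiFactorAuthPresent`. The policy fragment below is an illustrative sketch (the `NotAction` list keeps MFA-enrollment actions reachable so users can set up MFA), not a drop-in policy for any particular account, and the evaluator is a toy, not AWS's actual evaluation logic.

```python
# Sketch of a Deny statement that blocks everything except MFA setup when no
# MFA is present, plus a toy evaluator to demonstrate the intended effect.

MFA_DENY_STATEMENT = {
    "Sid": "DenyAllUnlessSignedInWithMFA",
    "Effect": "Deny",
    "NotAction": [
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:ListMFADevices",
        "iam:ResyncMFADevice",
        "sts:GetSessionToken",
    ],
    "Resource": "*",
    "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
}

def denies_request(statement, action, mfa_present):
    """Toy evaluator: does this Deny statement block the given request?"""
    if statement["Effect"] != "Deny":
        return False
    if action in statement.get("NotAction", []):
        return False  # MFA-setup actions stay allowed so users can enroll
    condition = statement["Condition"]["BoolIfExists"]["aws:MultiFactorAuthPresent"]
    return condition == "false" and not mfa_present
```

This matches the acceptance criterion above: without MFA, a user can do nothing besides set up MFA.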
|
7,391
| 10,518,191,592
|
IssuesEvent
|
2019-09-29 09:02:12
|
SerialLain3170/ComputerVision-Papers
|
https://api.github.com/repos/SerialLain3170/ComputerVision-Papers
|
opened
|
Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
|
Video Processing
|
# Paper
[Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis](https://arxiv.org/pdf/1909.12224.pdf)
# Summary
- Human Mesh Recovery, Flow Composition Module, Liquid Warping GANで構成
- Liquid Warping GANではsourceの情報をtargetに私ながらsource画像の詳細を残しながら生成を可能にしている
<img width="1190" alt="スクリーンショット 2019-09-29 17 59 14" src="https://user-images.githubusercontent.com/32360147/65829690-e260ed00-e2e2-11e9-9afb-333817687d52.png">
# Date
2019/09/27
|
1.0
|
Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis - # Paper
[Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis](https://arxiv.org/pdf/1909.12224.pdf)
# Summary
- Human Mesh Recovery, Flow Composition Module, Liquid Warping GANで構成
- Liquid Warping GANではsourceの情報をtargetに私ながらsource画像の詳細を残しながら生成を可能にしている
<img width="1190" alt="スクリーンショット 2019-09-29 17 59 14" src="https://user-images.githubusercontent.com/32360147/65829690-e260ed00-e2e2-11e9-9afb-333817687d52.png">
# Date
2019/09/27
|
process
|
liquid warping gan a unified framework for human motion imitation appearance transfer and novel view synthesis peper summary human mesh recovery flow composition module liquid warping ganで構成 liquid warping ganではsourceの情報をtargetに私ながらsource画像の詳細を残しながら生成を可能にしている img width alt スクリーンショット src date
| 1
|
22,345
| 31,021,010,427
|
IssuesEvent
|
2023-08-10 05:20:31
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Add Python 3.9 tests
|
type: process
|
Choose a few libraries (mix of GAPIC, handwrittens) to add 3.9 tests before the 3.9.0 release is made ([release schedule](https://www.python.org/dev/peps/pep-0596/#id4)).
Steps:
* Add 3.9 to the [docker image](https://github.com/googleapis/testing-infra-docker/blob/4040f69470a80613a3e342031b78cca535d30a23/python/googleapis/python-multi/Dockerfile).
* Add 3.9 unit/system tests to a handful of libraries. The extra version can be added via templates ([see example](https://github.com/googleapis/python-texttospeech/blob/b68df446daa7983cad1d31553ece6df569c932b2/synth.py#L43-L48)).
CC @plamut
|
1.0
|
Add Python 3.9 tests - Choose a few libraries (mix of GAPIC, handwrittens) to add 3.9 tests before the 3.9.0 release is made ([release schedule](https://www.python.org/dev/peps/pep-0596/#id4)).
Steps:
* Add 3.9 to the [docker image](https://github.com/googleapis/testing-infra-docker/blob/4040f69470a80613a3e342031b78cca535d30a23/python/googleapis/python-multi/Dockerfile).
* Add 3.9 unit/system tests to a handful of libraries. The extra version can be added via templates ([see example](https://github.com/googleapis/python-texttospeech/blob/b68df446daa7983cad1d31553ece6df569c932b2/synth.py#L43-L48)).
CC @plamut
|
process
|
add python tests choose a few libraries mix of gapic handwrittens to add tests before the release is made steps add to the add unit system tests to a handful of libraries the extra version can be added via templates cc plamut
| 1
|
767,063
| 26,909,750,653
|
IssuesEvent
|
2023-02-06 22:20:35
|
workcraft/workcraft
|
https://api.github.com/repos/workcraft/workcraft
|
opened
|
Incorrect base for relative path when setCircuitEnvironment wrapper is called from JavaScript
|
bug priority:critical tag:core tag:model:circuit status:confirmed
|
If JavaScript uses the `setCircuitEnvironment(circuitWork, envStg)` wrapper function with the `envStg` parameter passed as a String path, then it is resolved relative to the directory of the Workcraft runner script. It would be more useful if this path were relative to the working directory (passed with the `-dir:PATH` command line option) or to the location of the `circuitWork` file.
(Note that when Workcraft is started via `gradlew` this `envStg` path is interpreted relative to the working directory, as one would expect.)
A temporary workaround is to pass `envStg` as a Workspace Entry, e.g. `setCircuitEnvironment(circuitWork, load(pathToStg))`, which always uses a path relative to the working directory.
Probably the best option is to interpret `envStg` relative to the location of the `circuitWork` file, similar to how it is implemented in the `setCircuitComponentRefinement(circuitWork, componentRef, refinementPath)` wrapper function.
|
1.0
|
Incorrect base for relative path when setCircuitEnvironment wrapper is called from JavaScript - If JavaScript uses the `setCircuitEnvironment(circuitWork, envStg)` wrapper function with the `envStg` parameter passed as a String path, then it is resolved relative to the directory of the Workcraft runner script. It would be more useful if this path were relative to the working directory (passed with the `-dir:PATH` command line option) or to the location of the `circuitWork` file.
(Note that when Workcraft is started via `gradlew` this `envStg` path is interpreted relative to the working directory, as one would expect.)
A temporary workaround is to pass `envStg` as a Workspace Entry, e.g. `setCircuitEnvironment(circuitWork, load(pathToStg))`, which always uses a path relative to the working directory.
Probably the best option is to interpret `envStg` relative to the location of the `circuitWork` file, similar to how it is implemented in the `setCircuitComponentRefinement(circuitWork, componentRef, refinementPath)` wrapper function.
|
non_process
|
incorrect base for relative path when setcircuitenvironment wrapper is called from javascript if javascript uses setcircuitenvironment circuitwork envstg wrapper function with envstg parameter passed as a string path then it is relative to the directory of workcraft runner script it would be more useful if this path was relative to the working directory passed with dir path command line option or to the location of the circuitwork file note that when workcraft is started via gradlew this envstg path is interpreted relative to the working directory as one expect a temporary workaround is to pass envstg as a workspace entry e g as follows setcircuitenvironment circuitwork load pathtostg that always utilises a path relative to the working directory probably the best option is to interpret envstg relative to the location of the circuitwork file similar to how it is impleemnted in setcircuitcomponentrefinement circuitwork componentref refinementpath wrapper function
| 0
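The resolution rule the Workcraft issue above argues for can be sketched in a few lines: a relative environment path should be resolved against the directory of the circuit work file rather than against wherever the runner script happens to start. Function and parameter names below are hypothetical, not Workcraft's actual Java API.

```python
import os

# Sketch of the preferred path interpretation: resolve a relative env path
# against the directory containing the circuit work file.

def resolve_env_path(circuit_work_path, env_path):
    """Resolve env_path relative to the circuit file unless it is absolute."""
    if os.path.isabs(env_path):
        return os.path.normpath(env_path)
    base_dir = os.path.dirname(os.path.abspath(circuit_work_path))
    return os.path.normpath(os.path.join(base_dir, env_path))
```

With this rule, `env.stg.work` next to `circuit.circuit.work` resolves the same way regardless of the working directory, mirroring the `setCircuitComponentRefinement` behavior mentioned above.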
|
51,248
| 10,602,183,600
|
IssuesEvent
|
2019-10-10 13:47:23
|
fac-17/My-Body-Back
|
https://api.github.com/repos/fac-17/My-Body-Back
|
closed
|
♻️ Use reacts power
|
CODE REVIEW
|
https://github.com/fac-17/My-Body-Back/blob/7db9eac41abbc2e77244bb6ce16876cfa4952383/src/components/NotesOfLove/NotesOfLove.js#L20-L29
React is an abstraction layer on top of vanilla DOM manipulation. It streamlines a lot for us, and does a lot of magic optimisation. We shouldn't be accessing the DOM directly.
`className` is React's way of passing classes to elements. This is where you should be adding/removing classes. You shouldn't be doing this yourself directly.
Can you think of any ways you could do this? Feel free to @ me if you get stuck.
|
1.0
|
♻️ Use reacts power - https://github.com/fac-17/My-Body-Back/blob/7db9eac41abbc2e77244bb6ce16876cfa4952383/src/components/NotesOfLove/NotesOfLove.js#L20-L29
React is an abstraction layer on top of vanilla DOM manipulation. It streamlines a lot for us, and does a lot of magic optimisation. We shouldn't be accessing the DOM directly.
`className` is React's way of passing classes to elements. This is where you should be adding/removing classes. You shouldn't be doing this yourself directly.
Can you think of any ways you could do this? Feel free to @ me if you get stuck.
|
non_process
|
♻️ use reacts power react is an abstraction layer on top of vanilla dom manipulation it streamlines a lot for us and does lot of magic optimisation we shouldn t be accessing the dom directly classname is reacts way of passing classes to elements this is where you should be adding removing classes you shouldn t be doing this yourself directly can you think of any ways you could do this feel free to me if you get stuck
| 0
|
2,664
| 5,436,716,103
|
IssuesEvent
|
2017-03-06 02:46:16
|
uccser/kordac
|
https://api.github.com/repos/uccser/kordac
|
closed
|
Implement {Scratch} codeblock processor
|
Django feature processor implementation testing
|
For the KordacResult the code-hash (named)tuples should be stored in a set with a custom comparison.
|
1.0
|
Implement {Scratch} codeblock processor - For the KordacResult the code-hash (named)tuples should be stored in a set with a custom comparison.
|
process
|
implement scratch codeblock processor for the kordacresult the code hash named tuples should be stored in a set with a custom comparison
| 1
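The "code-hash (named)tuples stored in a set with a custom comparison" idea in the Kordac record above can be sketched as follows. Field names here are assumptions, not Kordac's actual ones: two required code files count as the same entry when their language and content hash match, regardless of where they were found.

```python
from collections import namedtuple

# Namedtuple subclass whose equality and hashing ignore the source_line field,
# so a set deduplicates entries by (language, content_hash) only.

_CodeFile = namedtuple("_CodeFile", ["language", "content_hash", "source_line"])

class CodeFile(_CodeFile):
    def __eq__(self, other):
        if not isinstance(other, CodeFile):
            return NotImplemented
        return (self.language, self.content_hash) == (other.language, other.content_hash)

    def __hash__(self):
        return hash((self.language, self.content_hash))

required = set()
required.add(CodeFile("scratch", "ab12", source_line=3))
required.add(CodeFile("scratch", "ab12", source_line=9))   # duplicate by hash
required.add(CodeFile("scratch", "ff00", source_line=12))
```

Note that overriding `__eq__` on a class silences the inherited `__hash__`, so both must be defined together for set membership to work.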
|
19,104
| 25,156,314,809
|
IssuesEvent
|
2022-11-10 13:49:15
|
spring-projects/spring-hateoas
|
https://api.github.com/repos/spring-projects/spring-hateoas
|
closed
|
regardless of Accept Header controller produces HAL-Style links (since v1.3.x)
|
resolution: invalid process: waiting for feedback in: configuration
|
Since version **1.3.x** the Accept header is not respected for the EntityModel json. I could reproduce the issue using the basic example of [spring-projects/spring-hateoas-examples](https://github.com/sleicht/spring-hateoas-examples/tree/issue-1799)
```
### employees json
GET http://localhost:8080/employees
accept: application/json
### employees hal
GET http://localhost:8080/employees
accept: application/hal+json
```
Both calls return the same response with the same style of links: `"_links": {"self": {"href": "http://localhost:8080/employees"}}`
Expected would be this:
```
### employees json
GET http://localhost:8080/employees
accept: application/json
```
returns this self link: `"links": [{"rel": "self","href": "http://localhost:8080/employees"}]`
```
### employees hal
GET http://localhost:8080/employees
accept: application/hal+json
```
returns this self link: `"_links": {"self": {"href": "http://localhost:8080/employees"}}`
|
1.0
|
regardless of Accept Header controller produces HAL-Style links (since v1.3.x) - Since version **1.3.x** the Accept header is not respected for the EntityModel json. I could reproduce the issue using the basic example of [spring-projects/spring-hateoas-examples](https://github.com/sleicht/spring-hateoas-examples/tree/issue-1799)
```
### employees json
GET http://localhost:8080/employees
accept: application/json
### employees hal
GET http://localhost:8080/employees
accept: application/hal+json
```
Both calls return the same response with the same style of links: `"_links": {"self": {"href": "http://localhost:8080/employees"}}`
Expected would be this:
```
### employees json
GET http://localhost:8080/employees
accept: application/json
```
returns this self link: `"links": [{"rel": "self","href": "http://localhost:8080/employees"}]`
```
### employees hal
GET http://localhost:8080/employees
accept: application/hal+json
```
returns this self link: `"_links": {"self": {"href": "http://localhost:8080/employees"}}`
|
process
|
regardless of accept header controller produces hal style links since x since version x the accept header is not respected for the entitymodel json i could reproduce the issue using the basic example of employees json get accept application json employees hal get accept application hal json both calls return the same response with the same style of links links self href expected would be this employees json get accept application json returns this self link links employees hal get accept application hal json returns this self link links self href
| 1
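The two link renderings the spring-hateoas report above expects from content negotiation can be made concrete with a small sketch. This is an illustration of the expected serializations, not Spring's actual rendering code: HAL nests links under `_links` keyed by rel, while plain `application/json` serializes them as a `links` array.

```python
# Toy renderer showing the link shape expected for each Accept header.

def render_links(links, accept):
    """links: list of (rel, href) pairs; accept: the request's Accept header."""
    if accept == "application/hal+json":
        return {"_links": {rel: {"href": href} for rel, href in links}}
    return {"links": [{"rel": rel, "href": href} for rel, href in links]}
```

The bug report is that both Accept headers produce the first (HAL) shape since 1.3.x, whereas `application/json` should produce the second.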
|
32,259
| 15,299,677,662
|
IssuesEvent
|
2021-02-24 11:15:58
|
milvus-io/milvus
|
https://api.github.com/repos/milvus-io/milvus
|
closed
|
Search time increased too much if nq/top-k value greater than 1000
|
Bug Bug | Severity | S3 type/performance
|
**Please state your issue using the following template and, most importantly, in English.**
**Describe the bug**
Need to reduce the search time when both the top-k and nq are set greater than 1000.
```
11:10:25 │ Nq/Top-k │ 1 │ 10 │ 100 │ 1000 │
11:10:25 ├─────────────┼─────────────┼─────────────┼─────────────┼─────────────┤
11:10:25 │ 1 │ 0.9 │ 0.78 │ 0.92 │ 1.43 │
11:10:25 │ 10 │ 1.01 │ 1.06 │ 1.21 │ 1.4 │
11:10:25 │ 100 │ 2.05 │ 2.19 │ 2.2 │ 2.31 │
11:10:25 │ 1000 │ 2.49 │ 2.48 │ 2.26 │ 7.76 │
11:10:25 │ 1200 │ 2.68 │ 2.94 │ 2.48 │ 8.49 │
```
```
11:10:25 INFO:milvus_benchmark.client:Server command: mode, result: GPU
11:10:25 2020-09-14 03:10:25,122 - milvus_benchmark.client - INFO - Server command: build_commit_id, result: a8943f824cd5a14d5eae4b5b38b47dfb6d303223
```
```
[2020-09-14 11:22:24,594][DEBUG][SERVER][WaitFinish][reqsched_thread] scheduler job [81] {"id":81,"number_of_search_segment":500,"type":0} all done
[2020-09-14 11:22:24,594][DEBUG][SERVER][print][reqsched_thread] [CACHE CPU] [item count]: 1000, [usage] 7504MB, [capacity] 32768MB
[2020-09-14 11:22:24,594][DEBUG][SERVER][PrintTimeRecord][reqsched_thread] DBImpl::Query: Engine query totally cost (7.497037 seconds [7497.037412 ms])
[2020-09-14 11:22:24,596][DEBUG][SERVER][PrintTimeRecord][reqsched_thread] SearchReq(table=sift_50m_100000_128_l2: construct result and send (7.498369 seconds [7498.368631 ms])
[2020-09-14 11:22:24,596][DEBUG][SERVER][PrintTimeRecord][reqsched_thread] SearchReq(table=sift_50m_100000_128_l2: done (7.498433 seconds [7498.433381 ms])
```
**Steps/Code to reproduce behavior**
Follow this [guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) to craft a minimal bug report. This helps us reproduce the issue you're having and resolve the issue more quickly.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Method of installation**
- [ ] Docker/cpu
- [x] Docker/gpu
- [ ] Build from source
**Environment details**
- Hardware/Software conditions (OS, CPU, GPU, Memory)
- Milvus version (master or released version)
**Configuration file**
Settings you made in `server_config.yaml` or `milvus.yaml`
```yaml
paste-file-content-here
```
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
|
True
|
Search time increased too much if nq/top-k value greater than 1000 - **Please state your issue using the following template and, most importantly, in English.**
**Describe the bug**
Need to reduce the search time when both the top-k and nq are set greater than 1000.
```
11:10:25 │ Nq/Top-k │ 1 │ 10 │ 100 │ 1000 │
11:10:25 ├─────────────┼─────────────┼─────────────┼─────────────┼─────────────┤
11:10:25 │ 1 │ 0.9 │ 0.78 │ 0.92 │ 1.43 │
11:10:25 │ 10 │ 1.01 │ 1.06 │ 1.21 │ 1.4 │
11:10:25 │ 100 │ 2.05 │ 2.19 │ 2.2 │ 2.31 │
11:10:25 │ 1000 │ 2.49 │ 2.48 │ 2.26 │ 7.76 │
11:10:25 │ 1200 │ 2.68 │ 2.94 │ 2.48 │ 8.49 │
```
```
11:10:25 INFO:milvus_benchmark.client:Server command: mode, result: GPU
11:10:25 2020-09-14 03:10:25,122 - milvus_benchmark.client - INFO - Server command: build_commit_id, result: a8943f824cd5a14d5eae4b5b38b47dfb6d303223
```
```
[2020-09-14 11:22:24,594][DEBUG][SERVER][WaitFinish][reqsched_thread] scheduler job [81] {"id":81,"number_of_search_segment":500,"type":0} all done
[2020-09-14 11:22:24,594][DEBUG][SERVER][print][reqsched_thread] [CACHE CPU] [item count]: 1000, [usage] 7504MB, [capacity] 32768MB
[2020-09-14 11:22:24,594][DEBUG][SERVER][PrintTimeRecord][reqsched_thread] DBImpl::Query: Engine query totally cost (7.497037 seconds [7497.037412 ms])
[2020-09-14 11:22:24,596][DEBUG][SERVER][PrintTimeRecord][reqsched_thread] SearchReq(table=sift_50m_100000_128_l2: construct result and send (7.498369 seconds [7498.368631 ms])
[2020-09-14 11:22:24,596][DEBUG][SERVER][PrintTimeRecord][reqsched_thread] SearchReq(table=sift_50m_100000_128_l2: done (7.498433 seconds [7498.433381 ms])
```
**Steps/Code to reproduce behavior**
Follow this [guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) to craft a minimal bug report. This helps us reproduce the issue you're having and resolve the issue more quickly.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Method of installation**
- [ ] Docker/cpu
- [x] Docker/gpu
- [ ] Build from source
**Environment details**
- Hardware/Software conditions (OS, CPU, GPU, Memory)
- Milvus version (master or released version)
**Configuration file**
Settings you made in `server_config.yaml` or `milvus.yaml`
```yaml
paste-file-content-here
```
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
|
non_process
|
search time increased too much if nq top k value greater than please state your issue using the following template and most importantly in english describe the bug need to reduce the search time when both the top k and nq is set greater than │ nq top k │ │ │ │ │ ├─────────────┼─────────────┼─────────────┼─────────────┼─────────────┤ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ │ info milvus benchmark client server command mode result gpu milvus benchmark client info server command build commit id result scheduler job id number of search segment type all done dbimpl query engine query totally cost seconds searchreq table sift construct result and send seconds searchreq table sift done seconds steps code to reproduce behavior follow this to craft a minimal bug report this helps us reproduce the issue you re having and resolve the issue more quickly expected behavior a clear and concise description of what you expected to happen method of installation docker cpu docker gpu build from source environment details hardware softwars conditions os cpu gpu memory milvus version master or released version configuration file settings you made in server config yaml or milvus yaml yaml paste file content here screenshots if applicable add screenshots to help explain your problem additional context add any other context about the problem here
| 0
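A common client-side workaround for the large-nq slowdown reported above is to split one big search into smaller batches and concatenate the per-query results. The sketch below is generic and hedged: `search_fn` stands in for a real client search call and is an assumption of this example, not the Milvus API.

```python
# Split a large-nq search into batches of at most batch_size queries and
# merge the per-query hit lists back in order.

def batched_search(search_fn, queries, top_k, batch_size=500):
    """Run search_fn over query chunks and concatenate the results."""
    results = []
    for start in range(0, len(queries), batch_size):
        results.extend(search_fn(queries[start:start + batch_size], top_k))
    return results
```

For the nq=1200 case in the table above, this issues three requests (500, 500, 200) instead of one, trading request count for staying under the slow nq/top-k regime.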
|
16,978
| 22,338,350,903
|
IssuesEvent
|
2022-06-14 20:56:52
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
No update of environment variables in restored terminals.
|
bug api terminal-persistence terminal-process
|
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes (Only one extension that reproduces this issue is active)
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.65.0-insider
- OS Version:
Version: 1.65.0-insider
Commit: bb221a61d29deabd99ee9431736d04f2175cb596
Date: 2022-02-11T05:18:07.499Z
Electron: 16.0.8
Chromium: 96.0.4664.110
Node.js: 16.9.1
V8: 9.6.180.21-electron.0
OS: Linux x64 5.11.0-49-generic
Steps to Reproduce:
1. Have a simple extension that changes `context.environmentVariableCollection`, where `context` is `vscode.ExtensionContext`.
2. Open a terminal, a triangle that allows to relaunch the terminal is correctly displayed on the change of `environmentVariableCollection`.
3. Close VSCode and launch it again.
4. The terminal is restored with "Session contents restored from ..." message.
5. No triangle that allows to relaunch the terminal is ever displayed for this restored terminal when the `environmentVariableCollection` changes.
6. Open a new terminal
7. When the `environmentVariableCollection` changes, the relaunch triangle is displayed for the newly opened terminals only. I do not see a way how to reload environment variables in the restored terminals.
8. Screenshot that illustrates the issue:

|
1.0
|
No update of environment variables in restored terminals. - <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes (Only one extension that reproduces this issue is active)
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.65.0-insider
- OS Version:
Version: 1.65.0-insider
Commit: bb221a61d29deabd99ee9431736d04f2175cb596
Date: 2022-02-11T05:18:07.499Z
Electron: 16.0.8
Chromium: 96.0.4664.110
Node.js: 16.9.1
V8: 9.6.180.21-electron.0
OS: Linux x64 5.11.0-49-generic
Steps to Reproduce:
1. Have a simple extension that changes `context.environmentVariableCollection`, where `context` is `vscode.ExtensionContext`.
2. Open a terminal, a triangle that allows to relaunch the terminal is correctly displayed on the change of `environmentVariableCollection`.
3. Close VSCode and launch it again.
4. The terminal is restored with "Session contents restored from ..." message.
5. No triangle that allows to relaunch the terminal is ever displayed for this restored terminal when the `environmentVariableCollection` changes.
6. Open a new terminal
7. When the `environmentVariableCollection` changes, the relaunch triangle is displayed for the newly opened terminals only. I do not see a way how to reload environment variables in the restored terminals.
8. Screenshot that illustrates the issue:

|
process
|
no update of environment variables in restored terminals does this issue occur when all extensions are disabled yes only one extension that reproduces this issue is active report issue dialog can assist with this vs code version insider os version version insider commit date electron chromium node js electron os linux generic steps to reproduce have a simple extension that changes context environmentvariablecollection where context is vscode extensioncontext open a terminal a triangle that allows to relaunch the terminal is correctly displayed on the change of environmentvariablecollection close vscode and launch it again the terminal is restored with session contents restored from message no triangle that allows to relaunch the terminal is ever displayed for this restored terminal when the environmentvariablecollection changes open a new terminal when the environmentvariablecollection changes the relaunch triangle is displayed for the newly opened terminals only i do not see a way how to reload environment variables in the restored terminals screenshot that illustrates the issue
| 1
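The VS Code behavior reported above can be modeled in miniature: mutations to an extension's environment variable collection apply only to terminals created after the change, while a restored terminal keeps its saved environment. This is a toy model of the reported symptom, not VS Code's implementation, and the class and method names are invented for illustration.

```python
# Toy model: restored terminals never see environment mutations made after
# restore, while freshly opened terminals do.

class Terminal:
    def __init__(self, env):
        self.env = dict(env)  # snapshot taken at creation/restore time

class Workbench:
    def __init__(self):
        self.collection = {}  # stands in for environmentVariableCollection

    def apply_mutation(self, name, value):
        self.collection[name] = value

    def open_terminal(self):
        return Terminal(self.collection)

    def restore_terminal(self, saved_env):
        # Restored terminals reuse their saved environment as-is, with no
        # relaunch prompt offered when the collection later changes.
        return Terminal(saved_env)
```

The missing piece in the report is exactly the relaunch prompt: new terminals get one when the collection changes, restored ones never do.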
|
259,282
| 22,442,302,428
|
IssuesEvent
|
2022-06-21 02:45:32
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Tests ci-npd-e2e-test are failing
|
priority/important-soon sig/node kind/failing-test lifecycle/stale triage/accepted
|
### Which jobs are failing?
https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-test
### Which tests are failing?
ci-npd-e2e-test.Overall
** Ginkgo timed out waiting for all parallel procs to report back. **
### Since when has it been failing?
Since the very beginning I can see from the testgrid, maybe longer ago
02/01/2022 failed after 2m31s.
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-npd-e2e-test/1488568640893620224
### Testgrid link
https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-test
### Reason for failure (if possible)
Timeout
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig node
|
1.0
|
Tests ci-npd-e2e-test are failing - ### Which jobs are failing?
https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-test
### Which tests are failing?
ci-npd-e2e-test.Overall
** Ginkgo timed out waiting for all parallel procs to report back. **
### Since when has it been failing?
Since the very beginning I can see from the testgrid, maybe longer ago
02/01/2022 failed after 2m31s.
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-npd-e2e-test/1488568640893620224
### Testgrid link
https://testgrid.k8s.io/sig-node-node-problem-detector#ci-npd-e2e-test
### Reason for failure (if possible)
Timeout
### Anything else we need to know?
_No response_
### Relevant SIG(s)
/sig node
|
non_process
|
tests ci npd test are failing which jobs are failing which tests are failing ci npd test overall ginkgo timed out waiting for all parallel procs to report back since when has it been failing since the very beginning i can see from the testgrid maybe longer ago failed after testgrid link reason for failure if possible timeout anything else we need to know no response relevant sig s sig node
| 0
|
15,401
| 19,593,789,969
|
IssuesEvent
|
2022-01-05 15:39:11
|
googleapis/google-cloud-dotnet
|
https://api.github.com/repos/googleapis/google-cloud-dotnet
|
opened
|
Storage: Reactivate requester pays tests
|
priority: p1 type: process
|
We have some CI issues that are making these tests fail. We are looking at this internally; in the meantime, let's deactivate the tests so that our CI runs are back to green.
|
1.0
|
Storage: Reactivate requester pays tests - We have some CI issues that are making these tests fail. We are looking at this internally; in the meantime, let's deactivate the tests so that our CI runs are back to green.
|
process
|
storage reactivate requester pays tests we have some ci issues that are making these tests fail we are looking at this internally in the meantime let s deactivate the tests so that our ci runs are back to green
| 1
|
131,307
| 18,240,446,530
|
IssuesEvent
|
2021-10-01 12:17:18
|
artsking/linux-4.1.15_CVE-2020-15436_withPatch
|
https://api.github.com/repos/artsking/linux-4.1.15_CVE-2020-15436_withPatch
|
opened
|
CVE-2015-7990 (Medium) detected in linux-stable-rtv4.1.33
|
security vulnerability
|
## CVE-2015-7990 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-4.1.15_CVE-2020-15436_withPatch/commit/db7c8816f3b71fce406a1c19b4090da8f1ed8624">db7c8816f3b71fce406a1c19b4090da8f1ed8624</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/rds/connection.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Race condition in the rds_sendmsg function in net/rds/sendmsg.c in the Linux kernel before 4.3.3 allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by using a socket that was not properly bound. NOTE: this vulnerability exists because of an incomplete fix for CVE-2015-6937.
<p>Publish Date: 2015-12-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-7990>CVE-2015-7990</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2015-7990">https://www.linuxkernelcves.com/cves/CVE-2015-7990</a></p>
<p>Release Date: 2015-12-28</p>
<p>Fix Resolution: v4.4-rc4,v4.3.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-7990 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2015-7990 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/artsking/linux-4.1.15_CVE-2020-15436_withPatch/commit/db7c8816f3b71fce406a1c19b4090da8f1ed8624">db7c8816f3b71fce406a1c19b4090da8f1ed8624</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/rds/connection.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Race condition in the rds_sendmsg function in net/rds/sendmsg.c in the Linux kernel before 4.3.3 allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by using a socket that was not properly bound. NOTE: this vulnerability exists because of an incomplete fix for CVE-2015-6937.
<p>Publish Date: 2015-12-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-7990>CVE-2015-7990</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2015-7990">https://www.linuxkernelcves.com/cves/CVE-2015-7990</a></p>
<p>Release Date: 2015-12-28</p>
<p>Fix Resolution: v4.4-rc4,v4.3.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files net rds connection c vulnerability details race condition in the rds sendmsg function in net rds sendmsg c in the linux kernel before allows local users to cause a denial of service null pointer dereference and system crash or possibly have unspecified other impact by using a socket that was not properly bound note this vulnerability exists because of an incomplete fix for cve publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
10,214
| 26,529,667,095
|
IssuesEvent
|
2023-01-19 11:28:22
|
facebook/react-native
|
https://api.github.com/repos/facebook/react-native
|
opened
|
couldn't find DSO to load: libjscexecutor.so
|
Needs: Triage :mag: Type: New Architecture
|
### Description
E/SoLoader: couldn't find DSO to load: libhermes.so caused by: dlopen failed: cannot locate symbol "__emutls_get_address" referenced by "/data/app/~~fi0LO6D2MoTiy52TfVSjbA==/dk.ao.AO-KNgz-FvvkLLLxHpm6hQn1A==/lib/arm64/libfolly_runtime.so"... result: 0
D/AndroidRuntime: Shutting down VM
E/AndroidRuntime: FATAL EXCEPTION: main
Process: dk.ao.AO, PID: 13836
java.lang.UnsatisfiedLinkError: couldn't find DSO to load: libhermes.so caused by: dlopen failed: cannot locate symbol "__emutls_get_address" referenced by "/data/app/~~fi0LO6D2MoTiy52TfVSjbA==/dk.ao.AO-KNgz-FvvkLLLxHpm6hQn1A==/lib/arm64/libfolly_runtime.so"... result: 0
at com.facebook.soloader.SoLoader.doLoadLibraryBySoName(SoLoader.java:1127)
at com.facebook.soloader.SoLoader.loadLibraryBySoNameImpl(SoLoader.java:943)
at com.facebook.soloader.SoLoader.loadLibraryBySoName(SoLoader.java:855)
at com.facebook.soloader.SoLoader.loadLibrary(SoLoader.java:802)
at com.facebook.soloader.SoLoader.loadLibrary(SoLoader.java:772)
at com.facebook.hermes.reactexecutor.HermesExecutor.loadLibrary(HermesExecutor.java:26)
at com.facebook.hermes.reactexecutor.HermesExecutor.<clinit>(HermesExecutor.java:20)
at com.facebook.hermes.reactexecutor.HermesExecutor.loadLibrary(HermesExecutor.java:24)
at com.facebook.react.ReactInstanceManagerBuilder.getDefaultJSExecutorFactory(ReactInstanceManagerBuilder.java:369)
at com.facebook.react.ReactInstanceManagerBuilder.build(ReactInstanceManagerBuilder.java:316)
at com.facebook.react.ReactNativeHost.createReactInstanceManager(ReactNativeHost.java:94)
at com.facebook.react.ReactNativeHost.getReactInstanceManager(ReactNativeHost.java:41)
at dk.ao.AO.MainApplication.onCreate(MainApplication.java:60)
at android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java:1277)
at android.app.ActivityThread.handleBindApplication(ActivityThread.java:6759)
at android.app.ActivityThread.-$$Nest$mhandleBindApplication(Unknown Source:0)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2133)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loopOnce(Looper.java:201)
at android.os.Looper.loop(Looper.java:288)
at android.app.ActivityThread.main(ActivityThread.java:7872)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:548)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:936)
Caused by: java.lang.UnsatisfiedLinkError: dlopen failed: cannot locate symbol "__emutls_get_address" referenced by "/data/app/~~fi0LO6D2MoTiy52TfVSjbA==/dk.ao.AO-KNgz-FvvkLLLxHpm6hQn1A==/lib/arm64/libfolly_runtime.so"...
at java.lang.Runtime.load0(Runtime.java:929)
at java.lang.System.load(System.java:1625)
at com.facebook.soloader.SoLoader$1.load(SoLoader.java:558)
at com.facebook.soloader.DirectorySoSource.loadLibraryFrom(DirectorySoSource.java:110)
at com.facebook.soloader.DirectorySoSource.loadLibrary(DirectorySoSource.java:63)
at com.facebook.soloader.ApplicationSoSource.loadLibrary(ApplicationSoSource.java:91)
at com.facebook.soloader.SoLoader.doLoadLibraryBySoName(SoLoader.java:1067)
at com.facebook.soloader.SoLoader.loadLibraryBySoNameImpl(SoLoader.java:943)
at com.facebook.soloader.SoLoader.loadLibraryBySoName(SoLoader.java:855)
at com.facebook.soloader.SoLoader.loadLibrary(SoLoader.java:802)
at com.facebook.soloader.SoLoader.loadLibrary(SoLoader.java:772)
at com.facebook.hermes.reactexecutor.HermesExecutor.loadLibrary(HermesExecutor.java:26)
at com.facebook.hermes.reactexecutor.HermesExecutor.<clinit>(HermesExecutor.java:20)
at com.facebook.hermes.reactexecutor.HermesExecutor.loadLibrary(HermesExecutor.java:24)
at com.facebook.react.ReactInstanceManagerBuilder.getDefaultJSExecutorFactory(ReactInstanceManagerBuilder.java:369)
at com.facebook.react.ReactInstanceManagerBuilder.build(ReactInstanceManagerBuilder.java:316)
at com.facebook.react.ReactNativeHost.createReactInstanceManager(ReactNativeHost.java:94)
at com.facebook.react.ReactNativeHost.getReactInstanceManager(ReactNativeHost.java:41)
at dk.ao.AO.MainApplication.onCreate(MainApplication.java:60)
at android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java:1277)
at android.app.ActivityThread.handleBindApplication(ActivityThread.java:6759)
at android.app.ActivityThread.-$$Nest$mhandleBindApplication(Unknown Source:0)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2133)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loopOnce(Looper.java:201)
at android.os.Looper.loop(Looper.java:288)
at android.app.ActivityThread.main(ActivityThread.java:7872)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:548)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:936)
### Version
0.71.0
### Output of `npx react-native info`
System:
OS: macOS 13.1
CPU: (10) arm64 Apple M1 Pro
Memory: 70.36 MB / 16.00 GB
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 16.16.0 - ~/.nvm/versions/node/v16.16.0/bin/node
Yarn: 1.22.19 - /opt/homebrew/bin/yarn
npm: 8.11.0 - ~/.nvm/versions/node/v16.16.0/bin/npm
Watchman: Not Found
Managers:
CocoaPods: 1.11.3 - /opt/homebrew/lib/ruby/gems/2.7.0/bin/pod
SDKs:
iOS SDK:
Platforms: DriverKit 22.2, iOS 16.2, macOS 13.1, tvOS 16.1, watchOS 9.1
Android SDK: Not Found
IDEs:
Android Studio: 2021.3 AI-213.7172.25.2113.9123335
Xcode: 14.2/14C18 - /usr/bin/xcodebuild
Languages:
Java: 17.0.4 - /usr/bin/javac
npmPackages:
@react-native-community/cli: Not Found
react: 18.2.0 => 18.2.0
react-native: 0.71.0 => 0.71.0
react-native-macos: Not Found
npmGlobalPackages:
*react-native*: Not Found
### Steps to reproduce
When building towards Android no issues on IOS.
### Snack, code example, screenshot, or link to a repository
When building towards Android no issues on IOS.
|
1.0
|
couldn't find DSO to load: libjscexecutor.so - ### Description
E/SoLoader: couldn't find DSO to load: libhermes.so caused by: dlopen failed: cannot locate symbol "__emutls_get_address" referenced by "/data/app/~~fi0LO6D2MoTiy52TfVSjbA==/dk.ao.AO-KNgz-FvvkLLLxHpm6hQn1A==/lib/arm64/libfolly_runtime.so"... result: 0
D/AndroidRuntime: Shutting down VM
E/AndroidRuntime: FATAL EXCEPTION: main
Process: dk.ao.AO, PID: 13836
java.lang.UnsatisfiedLinkError: couldn't find DSO to load: libhermes.so caused by: dlopen failed: cannot locate symbol "__emutls_get_address" referenced by "/data/app/~~fi0LO6D2MoTiy52TfVSjbA==/dk.ao.AO-KNgz-FvvkLLLxHpm6hQn1A==/lib/arm64/libfolly_runtime.so"... result: 0
at com.facebook.soloader.SoLoader.doLoadLibraryBySoName(SoLoader.java:1127)
at com.facebook.soloader.SoLoader.loadLibraryBySoNameImpl(SoLoader.java:943)
at com.facebook.soloader.SoLoader.loadLibraryBySoName(SoLoader.java:855)
at com.facebook.soloader.SoLoader.loadLibrary(SoLoader.java:802)
at com.facebook.soloader.SoLoader.loadLibrary(SoLoader.java:772)
at com.facebook.hermes.reactexecutor.HermesExecutor.loadLibrary(HermesExecutor.java:26)
at com.facebook.hermes.reactexecutor.HermesExecutor.<clinit>(HermesExecutor.java:20)
at com.facebook.hermes.reactexecutor.HermesExecutor.loadLibrary(HermesExecutor.java:24)
at com.facebook.react.ReactInstanceManagerBuilder.getDefaultJSExecutorFactory(ReactInstanceManagerBuilder.java:369)
at com.facebook.react.ReactInstanceManagerBuilder.build(ReactInstanceManagerBuilder.java:316)
at com.facebook.react.ReactNativeHost.createReactInstanceManager(ReactNativeHost.java:94)
at com.facebook.react.ReactNativeHost.getReactInstanceManager(ReactNativeHost.java:41)
at dk.ao.AO.MainApplication.onCreate(MainApplication.java:60)
at android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java:1277)
at android.app.ActivityThread.handleBindApplication(ActivityThread.java:6759)
at android.app.ActivityThread.-$$Nest$mhandleBindApplication(Unknown Source:0)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2133)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loopOnce(Looper.java:201)
at android.os.Looper.loop(Looper.java:288)
at android.app.ActivityThread.main(ActivityThread.java:7872)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:548)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:936)
Caused by: java.lang.UnsatisfiedLinkError: dlopen failed: cannot locate symbol "__emutls_get_address" referenced by "/data/app/~~fi0LO6D2MoTiy52TfVSjbA==/dk.ao.AO-KNgz-FvvkLLLxHpm6hQn1A==/lib/arm64/libfolly_runtime.so"...
at java.lang.Runtime.load0(Runtime.java:929)
at java.lang.System.load(System.java:1625)
at com.facebook.soloader.SoLoader$1.load(SoLoader.java:558)
at com.facebook.soloader.DirectorySoSource.loadLibraryFrom(DirectorySoSource.java:110)
at com.facebook.soloader.DirectorySoSource.loadLibrary(DirectorySoSource.java:63)
at com.facebook.soloader.ApplicationSoSource.loadLibrary(ApplicationSoSource.java:91)
at com.facebook.soloader.SoLoader.doLoadLibraryBySoName(SoLoader.java:1067)
at com.facebook.soloader.SoLoader.loadLibraryBySoNameImpl(SoLoader.java:943)
at com.facebook.soloader.SoLoader.loadLibraryBySoName(SoLoader.java:855)
at com.facebook.soloader.SoLoader.loadLibrary(SoLoader.java:802)
at com.facebook.soloader.SoLoader.loadLibrary(SoLoader.java:772)
at com.facebook.hermes.reactexecutor.HermesExecutor.loadLibrary(HermesExecutor.java:26)
at com.facebook.hermes.reactexecutor.HermesExecutor.<clinit>(HermesExecutor.java:20)
at com.facebook.hermes.reactexecutor.HermesExecutor.loadLibrary(HermesExecutor.java:24)
at com.facebook.react.ReactInstanceManagerBuilder.getDefaultJSExecutorFactory(ReactInstanceManagerBuilder.java:369)
at com.facebook.react.ReactInstanceManagerBuilder.build(ReactInstanceManagerBuilder.java:316)
at com.facebook.react.ReactNativeHost.createReactInstanceManager(ReactNativeHost.java:94)
at com.facebook.react.ReactNativeHost.getReactInstanceManager(ReactNativeHost.java:41)
at dk.ao.AO.MainApplication.onCreate(MainApplication.java:60)
at android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java:1277)
at android.app.ActivityThread.handleBindApplication(ActivityThread.java:6759)
at android.app.ActivityThread.-$$Nest$mhandleBindApplication(Unknown Source:0)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2133)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loopOnce(Looper.java:201)
at android.os.Looper.loop(Looper.java:288)
at android.app.ActivityThread.main(ActivityThread.java:7872)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:548)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:936)
### Version
0.71.0
### Output of `npx react-native info`
System:
OS: macOS 13.1
CPU: (10) arm64 Apple M1 Pro
Memory: 70.36 MB / 16.00 GB
Shell: 5.8.1 - /bin/zsh
Binaries:
Node: 16.16.0 - ~/.nvm/versions/node/v16.16.0/bin/node
Yarn: 1.22.19 - /opt/homebrew/bin/yarn
npm: 8.11.0 - ~/.nvm/versions/node/v16.16.0/bin/npm
Watchman: Not Found
Managers:
CocoaPods: 1.11.3 - /opt/homebrew/lib/ruby/gems/2.7.0/bin/pod
SDKs:
iOS SDK:
Platforms: DriverKit 22.2, iOS 16.2, macOS 13.1, tvOS 16.1, watchOS 9.1
Android SDK: Not Found
IDEs:
Android Studio: 2021.3 AI-213.7172.25.2113.9123335
Xcode: 14.2/14C18 - /usr/bin/xcodebuild
Languages:
Java: 17.0.4 - /usr/bin/javac
npmPackages:
@react-native-community/cli: Not Found
react: 18.2.0 => 18.2.0
react-native: 0.71.0 => 0.71.0
react-native-macos: Not Found
npmGlobalPackages:
*react-native*: Not Found
### Steps to reproduce
When building towards Android no issues on IOS.
### Snack, code example, screenshot, or link to a repository
When building towards Android no issues on IOS.
|
non_process
|
couldn t find dso to load libjscexecutor so description e soloader couldn t find dso to load libhermes so caused by dlopen failed cannot locate symbol emutls get address referenced by data app dk ao ao kngz lib libfolly runtime so result d androidruntime shutting down vm e androidruntime fatal exception main process dk ao ao pid java lang unsatisfiedlinkerror couldn t find dso to load libhermes so caused by dlopen failed cannot locate symbol emutls get address referenced by data app dk ao ao kngz lib libfolly runtime so result at com facebook soloader soloader doloadlibrarybysoname soloader java at com facebook soloader soloader loadlibrarybysonameimpl soloader java at com facebook soloader soloader loadlibrarybysoname soloader java at com facebook soloader soloader loadlibrary soloader java at com facebook soloader soloader loadlibrary soloader java at com facebook hermes reactexecutor hermesexecutor loadlibrary hermesexecutor java at com facebook hermes reactexecutor hermesexecutor hermesexecutor java at com facebook hermes reactexecutor hermesexecutor loadlibrary hermesexecutor java at com facebook react reactinstancemanagerbuilder getdefaultjsexecutorfactory reactinstancemanagerbuilder java at com facebook react reactinstancemanagerbuilder build reactinstancemanagerbuilder java at com facebook react reactnativehost createreactinstancemanager reactnativehost java at com facebook react reactnativehost getreactinstancemanager reactnativehost java at dk ao ao mainapplication oncreate mainapplication java at android app instrumentation callapplicationoncreate instrumentation java at android app activitythread handlebindapplication activitythread java at android app activitythread nest mhandlebindapplication unknown source at android app activitythread h handlemessage activitythread java at android os handler dispatchmessage handler java at android os looper looponce looper java at android os looper loop looper java at android app activitythread main activitythread 
java at java lang reflect method invoke native method at com android internal os runtimeinit methodandargscaller run runtimeinit java at com android internal os zygoteinit main zygoteinit java caused by java lang unsatisfiedlinkerror dlopen failed cannot locate symbol emutls get address referenced by data app dk ao ao kngz lib libfolly runtime so at java lang runtime runtime java at java lang system load system java at com facebook soloader soloader load soloader java at com facebook soloader directorysosource loadlibraryfrom directorysosource java at com facebook soloader directorysosource loadlibrary directorysosource java at com facebook soloader applicationsosource loadlibrary applicationsosource java at com facebook soloader soloader doloadlibrarybysoname soloader java at com facebook soloader soloader loadlibrarybysonameimpl soloader java at com facebook soloader soloader loadlibrarybysoname soloader java at com facebook soloader soloader loadlibrary soloader java at com facebook soloader soloader loadlibrary soloader java at com facebook hermes reactexecutor hermesexecutor loadlibrary hermesexecutor java at com facebook hermes reactexecutor hermesexecutor hermesexecutor java at com facebook hermes reactexecutor hermesexecutor loadlibrary hermesexecutor java at com facebook react reactinstancemanagerbuilder getdefaultjsexecutorfactory reactinstancemanagerbuilder java at com facebook react reactinstancemanagerbuilder build reactinstancemanagerbuilder java at com facebook react reactnativehost createreactinstancemanager reactnativehost java at com facebook react reactnativehost getreactinstancemanager reactnativehost java at dk ao ao mainapplication oncreate mainapplication java at android app instrumentation callapplicationoncreate instrumentation java at android app activitythread handlebindapplication activitythread java at android app activitythread nest mhandlebindapplication unknown source at android app activitythread h handlemessage activitythread java 
at android os handler dispatchmessage handler java at android os looper looponce looper java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invoke native method at com android internal os runtimeinit methodandargscaller run runtimeinit java at com android internal os zygoteinit main zygoteinit java version output of npx react native info system os macos cpu apple pro memory mb gb shell bin zsh binaries node nvm versions node bin node yarn opt homebrew bin yarn npm nvm versions node bin npm watchman not found managers cocoapods opt homebrew lib ruby gems bin pod sdks ios sdk platforms driverkit ios macos tvos watchos android sdk not found ides android studio ai xcode usr bin xcodebuild languages java usr bin javac npmpackages react native community cli not found react react native react native macos not found npmglobalpackages react native not found steps to reproduce when building towards android no issues on ios snack code example screenshot or link to a repository when building towards android no issues on ios
| 0
|
4,530
| 7,372,097,598
|
IssuesEvent
|
2018-03-13 13:53:21
|
w3c/transitions
|
https://api.github.com/repos/w3c/transitions
|
opened
|
Document First Public Candidate Recommendation
|
Process Issue
|
Some gets confused and don't realize First Public Candidate Recommendation is "FPWD+CR"
|
1.0
|
Document First Public Candidate Recommendation - Some gets confused and don't realize First Public Candidate Recommendation is "FPWD+CR"
|
process
|
document first public candidate recommendation some gets confused and don t realize first public candidate recommendation is fpwd cr
| 1
|
7,150
| 10,293,265,427
|
IssuesEvent
|
2019-08-27 16:01:13
|
heim-rs/heim
|
https://api.github.com/repos/heim-rs/heim
|
closed
|
Convert fs errors into process errors for Linux implementation of heim-process
|
A-process A-runtime C-bug O-linux
|
fs routines are returning `heim::Error`, which hides the `std::io::Error`, but for Linux implementation it seems important now to handle "file not found" cases and convert them into `ProcessError::NoSuchProcess` error.
|
1.0
|
Convert fs errors into process errors for Linux implementation of heim-process - fs routines are returning `heim::Error`, which hides the `std::io::Error`, but for Linux implementation it seems important now to handle "file not found" cases and convert them into `ProcessError::NoSuchProcess` error.
|
process
|
convert fs errors into process errors for linux implementation of heim process fs routines are returning heim error which hides the std io error but for linux implementation it seems important now to handle file not found cases and convert them into processerror nosuchprocess error
| 1
|
583,282
| 17,381,449,643
|
IssuesEvent
|
2021-07-31 20:00:46
|
hackforla/tdm-calculator
|
https://api.github.com/repos/hackforla/tdm-calculator
|
opened
|
Notify and Instruct Users on Final Summary page Target points are not met
|
level: easy p-Feature - Final Summary Page priority: MUST HAVE role: ui/ux
|
### Overview
Users need to be notified and instructed on the final page (Calculation/6) if they do not meet their target points.
### Action Items
Design
- [ ] Create Mock-up (or mock-ups) showing new notification on Final page
- [ ] Come up with best wording to notify users they have not meet their target points and can return to select more strategies
- [ ] Keep formatting and color/style in line with other notifications on page
- [ ] Show design or designs to product team for visual review
### Resources/Instructions
Current page when users do not meet target points

Current page when users do not meet target points
![Uploading Screen Shot 2021-07-31 at 12.53.43 PM.png…]()
|
1.0
|
Notify and Instruct Users on Final Summary page Target points are not met - ### Overview
Users need to be notified and instructed on the final page (Calculation/6) if they do not meet their target points.
### Action Items
Design
- [ ] Create Mock-up (or mock-ups) showing new notification on Final page
- [ ] Come up with best wording to notify users they have not meet their target points and can return to select more strategies
- [ ] Keep formatting and color/style in line with other notifications on page
- [ ] Show design or designs to product team for visual review
### Resources/Instructions
Current page when users do not meet target points

Current page when users do not meet target points
![Uploading Screen Shot 2021-07-31 at 12.53.43 PM.png…]()
|
non_process
|
notify and instruct users on final summary page target points are not met overview users need to be notified and instructed on the final page calculation if they do not meet their target points action items design create mock up or mock ups showing new notification on final page come up with best wording to notify users they have not meet their target points and can return to select more strategies keep formatting and color style in line with other notifications on page show design or designs to product team for visual review resources instructions current page when users do not meet target points current page when users do not meet target points
| 0
|
177,685
| 14,642,298,000
|
IssuesEvent
|
2020-12-25 10:26:28
|
jQAssistant/jqa-java-plugin
|
https://api.github.com/repos/jQAssistant/jqa-java-plugin
|
opened
|
Migrate the Plugin Documentation of the Java Plugin to the new Reference Manual
|
topic:documentation type:task
|
### Task Description
Migrate the existing documentation of the Java Plugin to the new reference manul.
### Definition of Done for the Implementers
- [ ] The reference manual contains the same information as the old user manual
- [ ] The section on the Java Plugin in the old user manual points to the section for the plugin in the reference manual
|
1.0
|
Migrate the Plugin Documentation of the Java Plugin to the new Reference Manual - ### Task Description
Migrate the existing documentation of the Java Plugin to the new reference manul.
### Definition of Done for the Implementers
- [ ] The reference manual contains the same information as the old user manual
- [ ] The section on the Java Plugin in the old user manual points to the section for the plugin in the reference manual
|
non_process
|
migrate the plugin documentation of the java plugin to the new reference manual task description migrate the existing documentation of the java plugin to the new reference manul definition of done for the implementers the reference manual contains the same information as the old user manual the section on the java plugin in the old user manual points to the section for the plugin in the reference manual
| 0
|
185,488
| 14,356,713,368
|
IssuesEvent
|
2020-11-30 11:59:14
|
NationalSecurityAgency/skills-service
|
https://api.github.com/repos/NationalSecurityAgency/skills-service
|
closed
|
Dashboard falsely reports badge as disabled if all skills are removed
|
bug test
|
To reproduce,
1. create a badge (global or project).
1. add a skill to the badge
1. publish the badge
1. Manage badge, remove skill (there should be zero skills on the badge now)
1. Badge should report that it is Live not Disabled
(note that this bug is also present in Global Badges that have zero skills but have project levels)
|
1.0
|
Dashboard falsely reports badge as disabled if all skills are removed - To reproduce,
1. create a badge (global or project).
1. add a skill to the badge
1. publish the badge
1. Manage badge, remove skill (there should be zero skills on the badge now)
1. Badge should report that it is Live not Disabled
(note that this bug is also present in Global Badges that have zero skills but have project levels)
|
non_process
|
dashboard falsely reports badge as disabled if all skills are removed to reproduce create a badge global or project add a skill to the badge publish the badge manage badge remove skill there should be zero skills on the badge now badge should report that it is live not disabled note that this bug is also present in global badges that have zero skills but have project levels
| 0
|
22,132
| 30,679,144,326
|
IssuesEvent
|
2023-07-26 07:59:06
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
More examples
|
automation/svc triaged assigned-to-author doc-enhancement process-automation/subsvc Pri2
|
Please provide an example of using a runbook to create a new container instance by calling a resource manager template from a storage account
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d4ea5e93-b220-57c1-b011-865b7da3c5a6
* Version Independent ID: 179a79da-0e12-716e-725e-9db6b8f20282
* Content: [Deploy an Azure Resource Manager template in an Azure Automation runbook](https://docs.microsoft.com/en-us/azure/automation/automation-deploy-template-runbook#feedback)
* Content Source: [articles/automation/automation-deploy-template-runbook.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-deploy-template-runbook.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
1.0
|
More examples - Please provide an example of using a runbook to create a new container instance by calling a resource manager template from a storage account
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d4ea5e93-b220-57c1-b011-865b7da3c5a6
* Version Independent ID: 179a79da-0e12-716e-725e-9db6b8f20282
* Content: [Deploy an Azure Resource Manager template in an Azure Automation runbook](https://docs.microsoft.com/en-us/azure/automation/automation-deploy-template-runbook#feedback)
* Content Source: [articles/automation/automation-deploy-template-runbook.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-deploy-template-runbook.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
process
|
more examples please provide an example of using a runbook to create a new container instance by calling a resource manager template from a storage account document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
| 1
|
64,775
| 6,921,904,308
|
IssuesEvent
|
2017-11-30 00:06:48
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
soak test fails consistently, "regular resource usage tracking resource tracking for 0 pods per node"
|
kind/bug kind/e2e-test-failure priority/important-soon sig/node
|
https://k8s-testgrid.appspot.com/release-1.7-blocking#soak-gci-gce-1-7-test
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Jun 9 03:35:17.488: Memory usage exceeding limits:
node bootstrap-e2e-minion-group-rkj5:
container "kubelet": expected RSS memory (MB) < 73400320; got 138641408
node bootstrap-e2e-minion-group-t7nn:
container "kubelet": expected RSS memory (MB) < 73400320; got 147742720
node bootstrap-e2e-minion-group-wkq3:
container "kubelet": expected RSS memory (MB) < 73400320; got 147116032
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155
```
@dchen1107 @ericchiang @yujuhong @krzyzacy @kubernetes/kubernetes-release-managers
|
1.0
|
soak test fails consistently, "regular resource usage tracking resource tracking for 0 pods per node" - https://k8s-testgrid.appspot.com/release-1.7-blocking#soak-gci-gce-1-7-test
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:271
Jun 9 03:35:17.488: Memory usage exceeding limits:
node bootstrap-e2e-minion-group-rkj5:
container "kubelet": expected RSS memory (MB) < 73400320; got 138641408
node bootstrap-e2e-minion-group-t7nn:
container "kubelet": expected RSS memory (MB) < 73400320; got 147742720
node bootstrap-e2e-minion-group-wkq3:
container "kubelet": expected RSS memory (MB) < 73400320; got 147116032
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:155
```
@dchen1107 @ericchiang @yujuhong @krzyzacy @kubernetes/kubernetes-release-managers
|
non_process
|
soak test fails consistently regular resource usage tracking resource tracking for pods per node go src io kubernetes output dockerized go src io kubernetes test kubelet perf go jun memory usage exceeding limits node bootstrap minion group container kubelet expected rss memory mb got node bootstrap minion group container kubelet expected rss memory mb got node bootstrap minion group container kubelet expected rss memory mb got go src io kubernetes output dockerized go src io kubernetes test kubelet perf go ericchiang yujuhong krzyzacy kubernetes kubernetes release managers
| 0
|
311,757
| 23,403,782,071
|
IssuesEvent
|
2022-08-12 10:40:24
|
Gepardec/containerization-training
|
https://api.github.com/repos/Gepardec/containerization-training
|
opened
|
Add this training to Gepardec/train by replacing the existing one which has been refactored
|
documentation enhancement
|
The bootstrap.sh shall be part of our Gepardec/train repository, which holds all of our trainings that can be provisioned in AWS.
Document the training in the GitHub Pages of Gepardec/train, and replace the existing containerization documentation, because it has changed.
|
1.0
|
Add this training to Gepardec/train by replacing the existing one which has been refactored - The bootstrap.sh shall be part of our Gepardec/train repository, which holds all of our trainings that can be provisioned in AWS.
Document the training in the GitHub Pages of Gepardec/train, and replace the existing containerization documentation, because it has changed.
|
non_process
|
add this training to gepardec train by replacing the existing one which has been refactored the bootstrap sh shall be part of our gepardec train repository which holds all of our trainings which can be provisioned in aws document the training in the github pages of gepardec train sand replace the existing containerization documentation because it has changed
| 0
|
137,814
| 12,795,564,204
|
IssuesEvent
|
2020-07-02 08:57:58
|
statisticsnorway/klass-subsets-api
|
https://api.github.com/repos/statisticsnorway/klass-subsets-api
|
closed
|
API Guide
|
documentation enhancement
|
**Is your feature request related to a problem? Please describe.**
Developers and users need exhaustive, up-to-date API documentation at all times.
**Describe the solution you'd like**
Suggest to use AsciiDoc Maven plugin (check out the implementation for Klass API project https://github.com/statisticsnorway/klass)
**Describe alternatives you've considered**
I'm open to any other automated API doc generators.
|
1.0
|
API Guide - **Is your feature request related to a problem? Please describe.**
Developers and users need exhaustive, up-to-date API documentation at all times.
**Describe the solution you'd like**
Suggest to use AsciiDoc Maven plugin (check out the implementation for Klass API project https://github.com/statisticsnorway/klass)
**Describe alternatives you've considered**
I'm open to any other automated API doc generators.
|
non_process
|
api guide is your feature request related to a problem please describe developers and users have to have exhaustive api documentation at any point in time describe the solution you d like suggest to use asciidoc maven plugin check out the implementation for klass api project describe alternatives you ve considered i m open to any other automated api doc generators
| 0
|
5,374
| 8,202,754,189
|
IssuesEvent
|
2018-09-02 13:29:15
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
reopened
|
Notification-line: project assignee notification problem
|
Process bug
|
@abrahamos
open new project.
assign yourself as admin.
the notification message is wrong.

|
1.0
|
Notification-line: project assignee notification problem - @abrahamos
open new project.
assign yourself as admin.
the notification message is wrong.

|
process
|
notification line project assignee notification problem abrahamos open new project assign yourself as admin the notification message wrong
| 1
|
13,597
| 16,174,846,256
|
IssuesEvent
|
2021-05-03 04:00:31
|
e4exp/paper_manager_abstract
|
https://api.github.com/repos/e4exp/paper_manager_abstract
|
reopened
|
Single Headed Attention RNN: Stop Thinking With Your Head
|
2019 Natural Language Processing Recurrent Neural Network
|
- https://arxiv.org/abs/1911.11423
- 2019
The leading approaches in language modeling are all obsessed with the TV shows of my youth, namely Transformers and Sesame Street.
Transformers this, Transformers that, and over here something like a bonfire of GPU-TPU-neuromorphic wafer-scale silicon.
We took the lazy path of using old, proven techniques with a fancy crypto-inspired acronym: the Single Headed Attention RNN (SHA-RNN).
The author's lone goal is to show that the entire field might have evolved in a different direction had we instead been obsessed with a slightly different acronym and a slightly different result.
We took a previously strong language model based only on boring LSTMs to within a stone's throw of the state-of-the-art byte-level language model results on enwik8.
This work involved no intensive hyperparameter optimization and lived entirely on a commodity desktop machine that made the author's small one-room apartment far too warm in the midst of a San Franciscan summer.
The final result is achievable in plus or minus 24 hours on a single GPU, partly because the author is impatient.
This attention mechanism readily extends to large contexts with minimal computation.
Like Sesame Street.
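For concreteness, the single attention head at the heart of the SHA-RNN reduces to plain scaled dot-product attention with one head; the sketch below is an illustrative stand-alone version, not the authors' implementation:

```python
import math

def _softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]  # shift by max for numerical stability
    s = sum(es)
    return [e / s for e in es]

def single_head_attention(queries, keys, values):
    """Scaled dot-product attention with a single head.

    queries: T vectors of dim d; keys/values: S vectors of dim d.
    Returns T context vectors, each a convex combination of the values.
    """
    d = len(queries[0])
    out = []
    for q in queries:
        # similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in keys]
        w = _softmax(scores)
        # weighted sum of value vectors, component by component
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

With identical keys the softmax weights are uniform, so each output is just the mean of the values; the abstract's claim is that one such head attached to a boring LSTM already goes a long way.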
|
1.0
|
Single Headed Attention RNN: Stop Thinking With Your Head - - https://arxiv.org/abs/1911.11423
- 2019
The leading approaches in language modeling are all obsessed with the TV shows of my youth, namely Transformers and Sesame Street.
Transformers this, Transformers that, and over here something like a bonfire of GPU-TPU-neuromorphic wafer-scale silicon.
We took the lazy path of using old, proven techniques with a fancy crypto-inspired acronym: the Single Headed Attention RNN (SHA-RNN).
The author's lone goal is to show that the entire field might have evolved in a different direction had we instead been obsessed with a slightly different acronym and a slightly different result.
We took a previously strong language model based only on boring LSTMs to within a stone's throw of the state-of-the-art byte-level language model results on enwik8.
This work involved no intensive hyperparameter optimization and lived entirely on a commodity desktop machine that made the author's small one-room apartment far too warm in the midst of a San Franciscan summer.
The final result is achievable in plus or minus 24 hours on a single GPU, partly because the author is impatient.
This attention mechanism readily extends to large contexts with minimal computation.
Like Sesame Street.
|
process
|
single headed attention rnn stop thinking with your head the leading approaches in language modeling are all obsessed with the tv shows of my youth namely transformers and sesame street transformers this transformers that and over here something like a bonfire of gpu tpu neuromorphic wafer scale silicon we took the lazy path of using old proven techniques with a fancy crypto inspired acronym the single headed attention rnn sha rnn the author s lone goal is to show that the entire field might have evolved in a different direction had we instead been obsessed with a slightly different acronym and a slightly different result we took a previously strong language model based only on boring lstms to within a stone s throw of this work involved no intensive hyperparameter optimization and lived entirely on a commodity desktop machine that made the author s small one room apartment far too warm in the midst of a san franciscan summer the final result partly because the author is impatient this attention mechanism readily extends to large contexts with minimal computation like sesame street
| 1
|
827,874
| 31,801,027,819
|
IssuesEvent
|
2023-09-13 11:07:08
|
OpenSpace/OpenSpace
|
https://api.github.com/repos/OpenSpace/OpenSpace
|
reopened
|
Night Sky action issue in dome
|
Type: Bug Priority: Major Component: Core
|
The action to show Cardinal directions is local and only shows on master and not on the dome.
|
1.0
|
Night Sky action issue in dome - The action to show Cardinal directions is local and only shows on master and not on the dome.
|
non_process
|
night sky actionissue in dome the action to show cardinal directions is local and only shows on master and not on the dome
| 0
|
21,624
| 30,022,826,708
|
IssuesEvent
|
2023-06-27 02:00:10
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Tue, 27 Jun 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Masked conditional variational autoencoders for chromosome straightening
- **Authors:** Jingxiong Li, Sunyi Zheng, Zhongyi Shui, Shichuan Zhang, Linyi Yang, Yuxuan Sun, Yunlong Zhang, Honglin Li, Yuanxin Ye, Peter M.A. van Ooijen, Kang Li, Lin Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.14129
- **Pdf link:** https://arxiv.org/pdf/2306.14129
- **Abstract**
Karyotyping is of importance for detecting chromosomal aberrations in human disease. However, chromosomes easily appear curved in microscopic images, which prevents cytogeneticists from analyzing chromosome types. To address this issue, we propose a framework for chromosome straightening, which comprises a preliminary processing algorithm and a generative model called masked conditional variational autoencoders (MC-VAE). The processing method utilizes patch rearrangement to address the difficulty in erasing low degrees of curvature, providing reasonable preliminary results for the MC-VAE. The MC-VAE further straightens the results by leveraging chromosome patches conditioned on their curvatures to learn the mapping between banding patterns and conditions. During model training, we apply a masking strategy with a high masking ratio to train the MC-VAE with eliminated redundancy. This yields a non-trivial reconstruction task, allowing the model to effectively preserve chromosome banding patterns and structure details in the reconstructed results. Extensive experiments on three public datasets with two stain styles show that our framework surpasses the performance of state-of-the-art methods in retaining banding patterns and structure details. Compared to using real-world bent chromosomes, the use of high-quality straightened chromosomes generated by our proposed method can improve the performance of various deep learning models for chromosome classification by a large margin. Such a straightening approach has the potential to be combined with other karyotyping systems to assist cytogeneticists in chromosome analysis.
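The high-masking-ratio strategy mentioned above follows the usual masked-autoencoder recipe: hide most patches and reconstruct from the remainder. A hypothetical sketch of the patch-selection step; the function name and ratio are illustrative, not from the paper:

```python
import random

def mask_patches(patches, mask_ratio, rng):
    """Split patches into (visible, masked_indices) at a given mask ratio,
    as in masked-autoencoder style training where most patches are hidden."""
    n = len(patches)
    n_mask = int(n * mask_ratio)
    order = list(range(n))
    rng.shuffle(order)                 # pick masked positions uniformly at random
    masked = sorted(order[:n_mask])
    hidden = set(masked)
    visible = [p for i, p in enumerate(patches) if i not in hidden]
    return visible, masked
```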
### Video object detection for privacy-preserving patient monitoring in intensive care
- **Authors:** Raphael Emberger (1), Jens Michael Boss (2), Daniel Baumann (2), Marko Seric (2), Shufan Huo (2 and 3), Lukas Tuggener (1), Emanuela Keller (2), Thilo Stadelmann (1 and 4) ((1) Centre for Artificial Intelligence, ZHAW School of Engineering, Winterthur, Switzerland, (2) Neurocritical Care Unit, Department of Neurosurgery and Institute of Intensive Care Medicine, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Switzerland, (3) Neurology, Charité - University Medicine Berlin, Berlin, Germany, (4) European Centre for Living Technology (ECLT), Ca' Bottacin, Venice, Italy)
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2306.14620
- **Pdf link:** https://arxiv.org/pdf/2306.14620
- **Abstract**
Patient monitoring in intensive care units, although assisted by biosensors, needs continuous supervision of staff. To reduce the burden on staff members, IT infrastructures are built to record monitoring data and develop clinical decision support systems. These systems, however, are vulnerable to artifacts (e.g. muscle movement due to ongoing treatment), which are often indistinguishable from real and potentially dangerous signals. Video recordings could facilitate the reliable classification of biosignals using object detection (OD) methods to find sources of unwanted artifacts. Due to privacy restrictions, only blurred videos can be stored, which severely impairs the possibility to detect clinically relevant events such as interventions or changes in patient status with standard OD methods. Hence, new kinds of approaches are necessary that exploit every kind of available information due to the reduced information content of blurred footage and that are at the same time easily implementable within the IT infrastructure of a normal hospital. In this paper, we propose a new method for exploiting information in the temporal succession of video frames. To be efficiently implementable using off-the-shelf object detectors that comply with given hardware constraints, we repurpose the image color channels to account for temporal consistency, leading to an improved detection rate of the object classes. Our method outperforms a standard YOLOv5 baseline model by +1.7% mAP@.5 while also training over ten times faster on our proprietary dataset. We conclude that this approach has shown effectiveness in the preliminary experiments and holds potential for more general video OD in the future.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### A Closer Look at Geometric Temporal Dynamics for Face Anti-Spoofing
- **Authors:** Chih-Jung Chang, Yaw-Chern Lee, Shih-Hsuan Yao, Min-Hung Chen, Chien-Yi Wang, Shang-Hong Lai, Trista Pei-Chun Chen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.14313
- **Pdf link:** https://arxiv.org/pdf/2306.14313
- **Abstract**
Face anti-spoofing (FAS) is indispensable for a face recognition system. Many texture-driven countermeasures were developed against presentation attacks (PAs), but the performance against unseen domains or unseen spoofing types is still unsatisfactory. Instead of exhaustively collecting all the spoofing variations and making binary decisions of live/spoof, we offer a new perspective on the FAS task to distinguish between normal and abnormal movements of live and spoof presentations. We propose Geometry-Aware Interaction Network (GAIN), which exploits dense facial landmarks with spatio-temporal graph convolutional network (ST-GCN) to establish a more interpretable and modularized FAS model. Additionally, with our cross-attention feature interaction mechanism, GAIN can be easily integrated with other existing methods to significantly boost performance. Our approach achieves state-of-the-art performance in the standard intra- and cross-dataset evaluations. Moreover, our model outperforms state-of-the-art methods by a large margin in the cross-dataset cross-type protocol on CASIA-SURF 3DMask (+10.26% higher AUC score), exhibiting strong robustness against domain shifts and unseen spoofing types.
### Optimizing Kernel-Target Alignment for cloud detection in multispectral satellite images
- **Authors:** Artur Miroszewski, Jakub Mielczarek, Filip Szczepanek, Grzegorz Czelusta, Bartosz Grabowski, Bertrand Le Saux, Jakub Nalepa
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.14515
- **Pdf link:** https://arxiv.org/pdf/2306.14515
- **Abstract**
The optimization of Kernel-Target Alignment (TA) has been recently proposed as a way to reduce the number of hardware resources in quantum classifiers. It allows to exchange highly expressive and costly circuits to moderate size, task oriented ones. In this work we propose a simple toy model to study the optimization landscape of the Kernel-Target Alignment. We find that for underparameterized circuits the optimization landscape possess either many local extrema or becomes flat with narrow global extremum. We find the dependence of the width of the global extremum peak on the amount of data introduced to the model. The experimental study was performed using multispectral satellite data, and we targeted the cloud detection task, being one of the most fundamental and important image analysis tasks in remote sensing.
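Kernel-Target Alignment itself has a compact definition: the normalized Frobenius inner product between the kernel matrix K and the ideal kernel yy^T built from labels y in {-1, +1}. A minimal classical sketch (the paper's quantum-circuit kernels are not modeled here):

```python
import math

def _frob_inner(A, B):
    """Frobenius inner product <A, B>_F of two equally-sized matrices."""
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def kernel_target_alignment(K, y):
    """A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F), labels y in {-1, +1}."""
    T = [[yi * yj for yj in y] for yi in y]  # ideal (target) kernel
    return _frob_inner(K, T) / (
        math.sqrt(_frob_inner(K, K)) * math.sqrt(_frob_inner(T, T))
    )
```

Alignment reaches 1 exactly when K is proportional to the ideal kernel, which is the direction TA optimization pushes the circuit parameters.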
### Cross Architecture Distillation for Face Recognition
- **Authors:** Weisong Zhao, Xiangyu Zhu, Zhixiang He, Xiao-Yu Zhang, Zhen Lei
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.14662
- **Pdf link:** https://arxiv.org/pdf/2306.14662
- **Abstract**
Transformers have emerged as the superior choice for face recognition tasks, but their insufficient platform acceleration hinders their application on mobile devices. In contrast, Convolutional Neural Networks (CNNs) capitalize on hardware-compatible acceleration libraries. Consequently, it has become indispensable to preserve the distillation efficacy when transferring knowledge from a Transformer-based teacher model to a CNN-based student model, known as Cross-Architecture Knowledge Distillation (CAKD). Despite its potential, the deployment of CAKD in face recognition encounters two challenges: 1) the teacher and student share disparate spatial information for each pixel, obstructing the alignment of feature space, and 2) the teacher network is not trained in the role of a teacher, lacking proficiency in handling distillation-specific knowledge. To surmount these two constraints, 1) we first introduce a Unified Receptive Fields Mapping module (URFM) that maps pixel features of the teacher and student into local features with unified receptive fields, thereby synchronizing the pixel-wise spatial information of teacher and student. Subsequently, 2) we develop an Adaptable Prompting Teacher network (APT) that integrates prompts into the teacher, enabling it to manage distillation-specific knowledge while preserving the model's discriminative capacity. Extensive experiments on popular face benchmarks and two large-scale verification sets demonstrate the superiority of our method.
### MotionGPT: Human Motion as a Foreign Language
- **Authors:** Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, Tao Chen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2306.14795
- **Pdf link:** https://arxiv.org/pdf/2306.14795
- **Abstract**
Though the advancement of pre-trained large language models unfolds, the exploration of building a unified model for language and other multi-modal data, such as motion, remains challenging and untouched so far. Fortunately, human motion displays a semantic coupling akin to human language, often perceived as a form of body language. By fusing language data with large-scale motion models, motion-language pre-training that can enhance the performance of motion-related tasks becomes feasible. Driven by this insight, we propose MotionGPT, a unified, versatile, and user-friendly motion-language model to handle multiple motion-relevant tasks. Specifically, we employ the discrete vector quantization for human motion and transfer 3D motion into motion tokens, similar to the generation process of word tokens. Building upon this "motion vocabulary", we perform language modeling on both motion and text in a unified manner, treating human motion as a specific language. Moreover, inspired by prompt learning, we pre-train MotionGPT with a mixture of motion-language data and fine-tune it on prompt-based question-and-answer tasks. Extensive experiments demonstrate that MotionGPT achieves state-of-the-art performances on multiple motion tasks including text-driven motion generation, motion captioning, motion prediction, and motion in-between.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### SpikeCodec: An End-to-end Learned Compression Framework for Spiking Camera
- **Authors:** Kexiang Feng, Chuanmin Jia, Siwei Ma, Wen Gao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2306.14108
- **Pdf link:** https://arxiv.org/pdf/2306.14108
- **Abstract**
Recently, the bio-inspired spike camera with continuous motion recording capability has attracted tremendous attention due to its ultra high temporal resolution imaging characteristic. Such imaging feature results in huge data storage and transmission burden compared to that of traditional camera, raising severe challenge and imminent necessity in compression for spike camera captured content. Existing lossy data compression methods could not be applied for compressing spike streams efficiently due to integrate-and-fire characteristic and binarized data structure. Considering the imaging principle and information fidelity of spike cameras, we introduce an effective and robust representation of spike streams. Based on this representation, we propose a novel learned spike compression framework using scene recovery, variational auto-encoder plus spike simulator. To our knowledge, it is the first data-trained model for efficient and robust spike stream compression. Extensive experimental results show that our method outperforms the conventional and learning-based codecs, contributing a strong baseline for learned spike data compression.
### Feature Adversarial Distillation for Point Cloud Classification
- **Authors:** YuXing Lee, Wei Wu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.14221
- **Pdf link:** https://arxiv.org/pdf/2306.14221
- **Abstract**
Due to the point cloud's irregular and unordered geometry structure, conventional knowledge distillation technology lost a lot of information when directly used on point cloud tasks. In this paper, we propose the Feature Adversarial Distillation (FAD) method, a generic adversarial loss function in point cloud distillation, to reduce loss during knowledge transfer. In the feature extraction stage, the features extracted by the teacher are used as the discriminator, and the students continuously generate new features in the training stage. The feature of the student is obtained by attacking the feedback from the teacher and getting a score to judge whether the student has learned the knowledge well or not. In experiments on standard point cloud classification on ModelNet40 and ScanObjectNN datasets, our method reduced the information loss of knowledge transfer in distillation in 40x model compression while maintaining competitive performance.
## Keyword: RAW
### Real-World Video for Zoom Enhancement based on Spatio-Temporal Coupling
- **Authors:** Zhiling Guo, Yinqiang Zheng, Haoran Zhang, Xiaodan Shi, Zekun Cai, Ryosuke Shibasaki, Jinyue Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2306.13875
- **Pdf link:** https://arxiv.org/pdf/2306.13875
- **Abstract**
In recent years, single-frame image super-resolution (SR) has become more realistic by considering the zooming effect and using real-world short- and long-focus image pairs. In this paper, we further investigate the feasibility of applying realistic multi-frame clips to enhance zoom quality via spatio-temporal information coupling. Specifically, we first built a real-world video benchmark, VideoRAW, by a synchronized co-axis optical system. The dataset contains paired short-focus raw and long-focus sRGB videos of different dynamic scenes. Based on VideoRAW, we then presented a Spatio-Temporal Coupling Loss, termed as STCL. The proposed STCL is intended for better utilization of information from paired and adjacent frames to align and fuse features both temporally and spatially at the feature level. The outperformed experimental results obtained in different zoom scenarios demonstrate the superiority of integrating real-world video dataset and STCL into existing SR models for zoom quality enhancement, and reveal that the proposed method can serve as an advanced and viable tool for video zoom.
### DesCo: Learning Object Recognition with Rich Language Descriptions
- **Authors:** Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.14060
- **Pdf link:** https://arxiv.org/pdf/2306.14060
- **Abstract**
Recent development in vision-language approaches has instigated a paradigm shift in learning visual recognition models from language supervision. These approaches align objects with language queries (e.g. "a photo of a cat") and improve the models' adaptability to identify novel objects and domains. Recently, several studies have attempted to query these models with complex language expressions that include specifications of fine-grained semantic details, such as attributes, shapes, textures, and relations. However, simply incorporating language descriptions as queries does not guarantee accurate interpretation by the models. In fact, our experiments show that GLIP, the state-of-the-art vision-language model for object detection, often disregards contextual information in the language descriptions and instead relies heavily on detecting objects solely by their names. To tackle the challenges, we propose a new description-conditioned (DesCo) paradigm of learning object recognition models with rich language descriptions consisting of two major innovations: 1) we employ a large language model as a commonsense knowledge engine to generate rich language descriptions of objects based on object names and the raw image-text caption; 2) we design context-sensitive queries to improve the model's ability in deciphering intricate nuances embedded within descriptions and enforce the model to focus on context rather than object names alone. On two novel object detection benchmarks, LVIS and OminiLabel, under the zero-shot detection setting, our approach achieves 34.8 APr minival (+9.1) and 29.3 AP (+3.6), respectively, surpassing the prior state-of-the-art models, GLIP and FIBER, by a large margin.
### A Gated Cross-domain Collaborative Network for Underwater Object Detection
- **Authors:** Linhui Dai, Hong Liu, Pinhao Song, Mengyuan Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.14141
- **Pdf link:** https://arxiv.org/pdf/2306.14141
- **Abstract**
Underwater object detection (UOD) plays a significant role in aquaculture and marine environmental protection. Considering the challenges posed by low contrast and low-light conditions in underwater environments, several underwater image enhancement (UIE) methods have been proposed to improve the quality of underwater images. However, only using the enhanced images does not improve the performance of UOD, since it may unavoidably remove or alter critical patterns and details of underwater objects. In contrast, we believe that exploring the complementary information from the two domains is beneficial for UOD. The raw image preserves the natural characteristics of the scene and texture information of the objects, while the enhanced image improves the visibility of underwater objects. Based on this perspective, we propose a Gated Cross-domain Collaborative Network (GCC-Net) to address the challenges of poor visibility and low contrast in underwater environments, which comprises three dedicated components. Firstly, a real-time UIE method is employed to generate enhanced images, which can improve the visibility of objects in low-contrast areas. Secondly, a cross-domain feature interaction module is introduced to facilitate the interaction and mine complementary information between raw and enhanced image features. Thirdly, to prevent the contamination of unreliable generated results, a gated feature fusion module is proposed to adaptively control the fusion ratio of cross-domain information. Our method presents a new UOD paradigm from the perspective of cross-domain information interaction and fusion. Experimental results demonstrate that the proposed GCC-Net achieves state-of-the-art performance on four underwater datasets.
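The gated feature fusion module described above can be pictured as an element-wise convex blend of the two domains, with a gate deciding the mixing ratio. A hypothetical scalar-gate sketch (the paper's gate is a learned network, which is not modeled here):

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fuse(raw_feat, enh_feat, gate_logits):
    """Blend raw-domain and enhanced-domain features element-wise:
    out = g * raw + (1 - g) * enhanced, with g = sigmoid(gate_logit).
    Gate logits here are given directly; in the paper they are predicted."""
    return [
        _sigmoid(l) * r + (1.0 - _sigmoid(l)) * e
        for r, e, l in zip(raw_feat, enh_feat, gate_logits)
    ]
```

A saturated positive logit keeps the raw-domain feature, a saturated negative logit keeps the enhanced one; intermediate logits interpolate, which is the "adaptive fusion ratio" the abstract refers to.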
### DiffSketcher: Text Guided Vector Sketch Synthesis through Latent Diffusion Models
- **Authors:** Ximing Xing, Chuang Wang, Haitao Zhou, Jing Zhang, Qian Yu, Dong Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2306.14685
- **Pdf link:** https://arxiv.org/pdf/2306.14685
- **Abstract**
Even though trained mainly on images, we discover that pretrained diffusion models show impressive power in guiding sketch synthesis. In this paper, we present DiffSketcher, an innovative algorithm that creates vectorized free-hand sketches using natural language input. DiffSketcher is developed based on a pre-trained text-to-image diffusion model. It performs the task by directly optimizing a set of Bezier curves with an extended version of the score distillation sampling (SDS) loss, which allows us to use a raster-level diffusion model as a prior for optimizing a parametric vectorized sketch generator. Furthermore, we explore attention maps embedded in the diffusion model for effective stroke initialization to speed up the generation process. The generated sketches demonstrate multiple levels of abstraction while maintaining recognizability, underlying structure, and essential visual details of the subject drawn. Our experiments show that DiffSketcher achieves greater quality than prior work.
## Keyword: raw image
### DesCo: Learning Object Recognition with Rich Language Descriptions
- **Authors:** Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.14060
- **Pdf link:** https://arxiv.org/pdf/2306.14060
- **Abstract**
Recent development in vision-language approaches has instigated a paradigm shift in learning visual recognition models from language supervision. These approaches align objects with language queries (e.g. "a photo of a cat") and improve the models' adaptability to identify novel objects and domains. Recently, several studies have attempted to query these models with complex language expressions that include specifications of fine-grained semantic details, such as attributes, shapes, textures, and relations. However, simply incorporating language descriptions as queries does not guarantee accurate interpretation by the models. In fact, our experiments show that GLIP, the state-of-the-art vision-language model for object detection, often disregards contextual information in the language descriptions and instead relies heavily on detecting objects solely by their names. To tackle the challenges, we propose a new description-conditioned (DesCo) paradigm of learning object recognition models with rich language descriptions consisting of two major innovations: 1) we employ a large language model as a commonsense knowledge engine to generate rich language descriptions of objects based on object names and the raw image-text caption; 2) we design context-sensitive queries to improve the model's ability in deciphering intricate nuances embedded within descriptions and enforce the model to focus on context rather than object names alone. On two novel object detection benchmarks, LVIS and OminiLabel, under the zero-shot detection setting, our approach achieves 34.8 APr minival (+9.1) and 29.3 AP (+3.6), respectively, surpassing the prior state-of-the-art models, GLIP and FIBER, by a large margin.
### A Gated Cross-domain Collaborative Network for Underwater Object Detection
- **Authors:** Linhui Dai, Hong Liu, Pinhao Song, Mengyuan Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.14141
- **Pdf link:** https://arxiv.org/pdf/2306.14141
- **Abstract**
Underwater object detection (UOD) plays a significant role in aquaculture and marine environmental protection. Considering the challenges posed by low contrast and low-light conditions in underwater environments, several underwater image enhancement (UIE) methods have been proposed to improve the quality of underwater images. However, only using the enhanced images does not improve the performance of UOD, since it may unavoidably remove or alter critical patterns and details of underwater objects. In contrast, we believe that exploring the complementary information from the two domains is beneficial for UOD. The raw image preserves the natural characteristics of the scene and texture information of the objects, while the enhanced image improves the visibility of underwater objects. Based on this perspective, we propose a Gated Cross-domain Collaborative Network (GCC-Net) to address the challenges of poor visibility and low contrast in underwater environments, which comprises three dedicated components. Firstly, a real-time UIE method is employed to generate enhanced images, which can improve the visibility of objects in low-contrast areas. Secondly, a cross-domain feature interaction module is introduced to facilitate the interaction and mine complementary information between raw and enhanced image features. Thirdly, to prevent the contamination of unreliable generated results, a gated feature fusion module is proposed to adaptively control the fusion ratio of cross-domain information. Our method presents a new UOD paradigm from the perspective of cross-domain information interaction and fusion. Experimental results demonstrate that the proposed GCC-Net achieves state-of-the-art performance on four underwater datasets.
# New submissions for Tue, 27 Jun 23

## Keyword: events
### Masked conditional variational autoencoders for chromosome straightening
- **Authors:** Jingxiong Li, Sunyi Zheng, Zhongyi Shui, Shichuan Zhang, Linyi Yang, Yuxuan Sun, Yunlong Zhang, Honglin Li, Yuanxin Ye, Peter M.A. van Ooijen, Kang Li, Lin Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.14129
- **Pdf link:** https://arxiv.org/pdf/2306.14129
- **Abstract**
Karyotyping is of importance for detecting chromosomal aberrations in human disease. However, chromosomes easily appear curved in microscopic images, which prevents cytogeneticists from analyzing chromosome types. To address this issue, we propose a framework for chromosome straightening, which comprises a preliminary processing algorithm and a generative model called masked conditional variational autoencoders (MC-VAE). The processing method utilizes patch rearrangement to address the difficulty in erasing low degrees of curvature, providing reasonable preliminary results for the MC-VAE. The MC-VAE further straightens the results by leveraging chromosome patches conditioned on their curvatures to learn the mapping between banding patterns and conditions. During model training, we apply a masking strategy with a high masking ratio to train the MC-VAE with eliminated redundancy. This yields a non-trivial reconstruction task, allowing the model to effectively preserve chromosome banding patterns and structure details in the reconstructed results. Extensive experiments on three public datasets with two stain styles show that our framework surpasses the performance of state-of-the-art methods in retaining banding patterns and structure details. Compared to using real-world bent chromosomes, the use of high-quality straightened chromosomes generated by our proposed method can improve the performance of various deep learning models for chromosome classification by a large margin. Such a straightening approach has the potential to be combined with other karyotyping systems to assist cytogeneticists in chromosome analysis.
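The high-ratio masking strategy described above can be sketched as random patch masking. This is a generic illustration only; the patch size and masking ratio below are assumptions, not the values used for the MC-VAE.

```python
import numpy as np

def mask_patches(image, patch=16, ratio=0.75, seed=None):
    """Zero out a random fraction of non-overlapping square patches.

    A minimal sketch of high-ratio patch masking; the actual patch
    size and ratio used in the paper are assumptions here.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    gh, gw = h // patch, w // patch
    n = gh * gw
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=int(n * ratio), replace=False)] = True
    out = image.copy()
    for i in np.flatnonzero(mask):
        r, c = divmod(i, gw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    return out, mask.reshape(gh, gw)

img = np.ones((64, 64), dtype=np.float32)
masked, mask = mask_patches(img, patch=16, ratio=0.75, seed=0)
```

With a 4x4 grid of 16 patches and a 0.75 ratio, 12 patches are erased, leaving the model a non-trivial reconstruction target.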
### Video object detection for privacy-preserving patient monitoring in intensive care
- **Authors:** Raphael Emberger (1), Jens Michael Boss (2), Daniel Baumann (2), Marko Seric (2), Shufan Huo (2 and 3), Lukas Tuggener (1), Emanuela Keller (2), Thilo Stadelmann (1 and 4) ((1) Centre for Artificial Intelligence, ZHAW School of Engineering, Winterthur, Switzerland, (2) Neurocritical Care Unit, Department of Neurosurgery and Institute of Intensive Care Medicine, Clinical Neuroscience Center, University Hospital Zurich and University of Zurich, Switzerland, (3) Neurology, Charité - University Medicine Berlin, Berlin, Germany, (4) European Centre for Living Technology (ECLT), Ca' Bottacin, Venice, Italy)
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2306.14620
- **Pdf link:** https://arxiv.org/pdf/2306.14620
- **Abstract**
Patient monitoring in intensive care units, although assisted by biosensors, needs continuous supervision of staff. To reduce the burden on staff members, IT infrastructures are built to record monitoring data and develop clinical decision support systems. These systems, however, are vulnerable to artifacts (e.g. muscle movement due to ongoing treatment), which are often indistinguishable from real and potentially dangerous signals. Video recordings could facilitate the reliable classification of biosignals using object detection (OD) methods to find sources of unwanted artifacts. Due to privacy restrictions, only blurred videos can be stored, which severely impairs the possibility to detect clinically relevant events such as interventions or changes in patient status with standard OD methods. Hence, new kinds of approaches are necessary that exploit every kind of available information due to the reduced information content of blurred footage and that are at the same time easily implementable within the IT infrastructure of a normal hospital. In this paper, we propose a new method for exploiting information in the temporal succession of video frames. To be efficiently implementable using off-the-shelf object detectors that comply with given hardware constraints, we repurpose the image color channels to account for temporal consistency, leading to an improved detection rate of the object classes. Our method outperforms a standard YOLOv5 baseline model by +1.7% mAP@.5 while also training over ten times faster on our proprietary dataset. We conclude that this approach has shown effectiveness in the preliminary experiments and holds potential for more general video OD in the future.
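The channel-repurposing idea above can be sketched by packing consecutive grayscale frames into the three channels an off-the-shelf RGB detector expects. The exact frame-to-channel assignment is an assumption; the paper only states that color channels are repurposed for temporal consistency.

```python
import numpy as np

def frames_to_temporal_rgb(frames):
    """Pack three consecutive grayscale frames into one 3-channel image.

    Sketch of repurposing RGB channels for temporal context so a
    standard detector (e.g. YOLOv5) can see motion across frames;
    the (t-1, t, t+1) channel order is an assumption.
    """
    assert len(frames) == 3, "expects exactly three consecutive frames"
    return np.stack(frames, axis=-1)  # H x W x 3

frames = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 20, 30)]
packed = frames_to_temporal_rgb(frames)
```

A static scene then yields three near-identical channels (a gray image), while moving objects produce channel disagreement that the detector can learn to exploit.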
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### A Closer Look at Geometric Temporal Dynamics for Face Anti-Spoofing
- **Authors:** Chih-Jung Chang, Yaw-Chern Lee, Shih-Hsuan Yao, Min-Hung Chen, Chien-Yi Wang, Shang-Hong Lai, Trista Pei-Chun Chen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.14313
- **Pdf link:** https://arxiv.org/pdf/2306.14313
- **Abstract**
Face anti-spoofing (FAS) is indispensable for a face recognition system. Many texture-driven countermeasures were developed against presentation attacks (PAs), but the performance against unseen domains or unseen spoofing types is still unsatisfactory. Instead of exhaustively collecting all the spoofing variations and making binary decisions of live/spoof, we offer a new perspective on the FAS task to distinguish between normal and abnormal movements of live and spoof presentations. We propose Geometry-Aware Interaction Network (GAIN), which exploits dense facial landmarks with spatio-temporal graph convolutional network (ST-GCN) to establish a more interpretable and modularized FAS model. Additionally, with our cross-attention feature interaction mechanism, GAIN can be easily integrated with other existing methods to significantly boost performance. Our approach achieves state-of-the-art performance in the standard intra- and cross-dataset evaluations. Moreover, our model outperforms state-of-the-art methods by a large margin in the cross-dataset cross-type protocol on CASIA-SURF 3DMask (+10.26% higher AUC score), exhibiting strong robustness against domain shifts and unseen spoofing types.
### Optimizing Kernel-Target Alignment for cloud detection in multispectral satellite images
- **Authors:** Artur Miroszewski, Jakub Mielczarek, Filip Szczepanek, Grzegorz Czelusta, Bartosz Grabowski, Bertrand Le Saux, Jakub Nalepa
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.14515
- **Pdf link:** https://arxiv.org/pdf/2306.14515
- **Abstract**
The optimization of Kernel-Target Alignment (TA) has recently been proposed as a way to reduce the number of hardware resources in quantum classifiers. It allows highly expressive and costly circuits to be exchanged for moderate-size, task-oriented ones. In this work we propose a simple toy model to study the optimization landscape of the Kernel-Target Alignment. We find that for underparameterized circuits the optimization landscape either possesses many local extrema or becomes flat with a narrow global extremum. We also characterize how the width of the global extremum peak depends on the amount of data introduced to the model. The experimental study was performed using multispectral satellite data, targeting the cloud detection task, one of the most fundamental and important image analysis tasks in remote sensing.
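Kernel-target alignment itself has a standard closed form, A(K, y) = <K, yy^T>_F / (||K||_F ||yy^T||_F), which can be computed directly:

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Standard kernel-target alignment A(K, y).

    A(K, y) = <K, yy^T>_F / (||K||_F * ||yy^T||_F), where <.,.>_F is
    the Frobenius inner product and y holds +/-1 class labels.
    """
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

y = np.array([1.0, 1.0, -1.0, -1.0])
K_ideal = np.outer(y, y)   # kernel perfectly matched to the labels
K_flat = np.eye(4)         # uninformative (identity) kernel
a_ideal = kernel_target_alignment(K_ideal, y)
a_flat = kernel_target_alignment(K_flat, y)
```

The ideal kernel attains alignment 1.0 while the identity kernel scores lower, which is the quantity the circuit parameters are tuned to maximize.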
### Cross Architecture Distillation for Face Recognition
- **Authors:** Weisong Zhao, Xiangyu Zhu, Zhixiang He, Xiao-Yu Zhang, Zhen Lei
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.14662
- **Pdf link:** https://arxiv.org/pdf/2306.14662
- **Abstract**
Transformers have emerged as the superior choice for face recognition tasks, but their insufficient platform acceleration hinders their application on mobile devices. In contrast, Convolutional Neural Networks (CNNs) capitalize on hardware-compatible acceleration libraries. Consequently, it has become indispensable to preserve the distillation efficacy when transferring knowledge from a Transformer-based teacher model to a CNN-based student model, known as Cross-Architecture Knowledge Distillation (CAKD). Despite its potential, the deployment of CAKD in face recognition encounters two challenges: 1) the teacher and student share disparate spatial information for each pixel, obstructing the alignment of feature space, and 2) the teacher network is not trained in the role of a teacher, lacking proficiency in handling distillation-specific knowledge. To surmount these two constraints, 1) we first introduce a Unified Receptive Fields Mapping module (URFM) that maps pixel features of the teacher and student into local features with unified receptive fields, thereby synchronizing the pixel-wise spatial information of teacher and student. Subsequently, 2) we develop an Adaptable Prompting Teacher network (APT) that integrates prompts into the teacher, enabling it to manage distillation-specific knowledge while preserving the model's discriminative capacity. Extensive experiments on popular face benchmarks and two large-scale verification sets demonstrate the superiority of our method.
### MotionGPT: Human Motion as a Foreign Language
- **Authors:** Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, Tao Chen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Graphics (cs.GR)
- **Arxiv link:** https://arxiv.org/abs/2306.14795
- **Pdf link:** https://arxiv.org/pdf/2306.14795
- **Abstract**
Though the advancement of pre-trained large language models unfolds, the exploration of building a unified model for language and other multi-modal data, such as motion, remains challenging and untouched so far. Fortunately, human motion displays a semantic coupling akin to human language, often perceived as a form of body language. By fusing language data with large-scale motion models, motion-language pre-training that can enhance the performance of motion-related tasks becomes feasible. Driven by this insight, we propose MotionGPT, a unified, versatile, and user-friendly motion-language model to handle multiple motion-relevant tasks. Specifically, we employ the discrete vector quantization for human motion and transfer 3D motion into motion tokens, similar to the generation process of word tokens. Building upon this "motion vocabulary", we perform language modeling on both motion and text in a unified manner, treating human motion as a specific language. Moreover, inspired by prompt learning, we pre-train MotionGPT with a mixture of motion-language data and fine-tune it on prompt-based question-and-answer tasks. Extensive experiments demonstrate that MotionGPT achieves state-of-the-art performances on multiple motion tasks including text-driven motion generation, motion captioning, motion prediction, and motion in-between.
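The discrete vector quantization step that turns motion features into tokens can be sketched as a nearest-neighbor codebook lookup. The codebook size and dimensionality below are toy assumptions, not MotionGPT's configuration.

```python
import numpy as np

def quantize(latents, codebook):
    """Map each continuous motion latent to its nearest codebook entry.

    Returns integer token ids (the "motion vocabulary" indices) and
    the quantized vectors. Squared Euclidean distance is assumed.
    """
    # Pairwise squared distances, shape (num_latents, codebook_size)
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    tokens = d.argmin(axis=1)
    return tokens, codebook[tokens]

codebook = np.eye(4)  # 4 codes in 4-D, purely illustrative
latents = np.array([[0.9, 0.1, 0.0, 0.0],
                    [0.0, 0.0, 0.2, 1.1]])
tokens, quantized = quantize(latents, codebook)
```

Once motion is expressed as token ids, it can be interleaved with text tokens and fed to a single language model, which is the core of the "motion as a foreign language" framing.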
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### SpikeCodec: An End-to-end Learned Compression Framework for Spiking Camera
- **Authors:** Kexiang Feng, Chuanmin Jia, Siwei Ma, Wen Gao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2306.14108
- **Pdf link:** https://arxiv.org/pdf/2306.14108
- **Abstract**
Recently, the bio-inspired spike camera with continuous motion recording capability has attracted tremendous attention due to its ultra-high temporal resolution imaging characteristic. This imaging feature results in a huge data storage and transmission burden compared to that of a traditional camera, raising a severe challenge and an imminent need for compression of spike-camera-captured content. Existing lossy data compression methods cannot compress spike streams efficiently due to the integrate-and-fire characteristic and binarized data structure. Considering the imaging principle and information fidelity of spike cameras, we introduce an effective and robust representation of spike streams. Based on this representation, we propose a novel learned spike compression framework using scene recovery and a variational auto-encoder plus a spike simulator. To our knowledge, it is the first data-trained model for efficient and robust spike stream compression. Extensive experimental results show that our method outperforms conventional and learning-based codecs, contributing a strong baseline for learned spike data compression.
### Feature Adversarial Distillation for Point Cloud Classification
- **Authors:** YuXing Lee, Wei Wu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.14221
- **Pdf link:** https://arxiv.org/pdf/2306.14221
- **Abstract**
Due to the point cloud's irregular and unordered geometry structure, conventional knowledge distillation loses a lot of information when directly applied to point cloud tasks. In this paper, we propose the Feature Adversarial Distillation (FAD) method, a generic adversarial loss function for point cloud distillation, to reduce loss during knowledge transfer. In the feature extraction stage, the features extracted by the teacher are used as the discriminator, and the student continuously generates new features during training. The student's features are obtained by attacking the feedback from the teacher and receiving a score that judges whether the student has learned the knowledge well. In experiments on standard point cloud classification on the ModelNet40 and ScanObjectNN datasets, our method reduces the information loss of knowledge transfer under 40x model compression while maintaining competitive performance.
## Keyword: RAW
### Real-World Video for Zoom Enhancement based on Spatio-Temporal Coupling
- **Authors:** Zhiling Guo, Yinqiang Zheng, Haoran Zhang, Xiaodan Shi, Zekun Cai, Ryosuke Shibasaki, Jinyue Yan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2306.13875
- **Pdf link:** https://arxiv.org/pdf/2306.13875
- **Abstract**
In recent years, single-frame image super-resolution (SR) has become more realistic by considering the zooming effect and using real-world short- and long-focus image pairs. In this paper, we further investigate the feasibility of applying realistic multi-frame clips to enhance zoom quality via spatio-temporal information coupling. Specifically, we first built a real-world video benchmark, VideoRAW, using a synchronized co-axis optical system. The dataset contains paired short-focus raw and long-focus sRGB videos of different dynamic scenes. Based on VideoRAW, we then present a Spatio-Temporal Coupling Loss, termed STCL. The proposed STCL is intended for better utilization of information from paired and adjacent frames to align and fuse features both temporally and spatially at the feature level. Experimental results obtained in different zoom scenarios demonstrate the superiority of integrating a real-world video dataset and STCL into existing SR models for zoom quality enhancement, and reveal that the proposed method can serve as an advanced and viable tool for video zoom.
### DesCo: Learning Object Recognition with Rich Language Descriptions
- **Authors:** Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.14060
- **Pdf link:** https://arxiv.org/pdf/2306.14060
- **Abstract**
Recent development in vision-language approaches has instigated a paradigm shift in learning visual recognition models from language supervision. These approaches align objects with language queries (e.g. "a photo of a cat") and improve the models' adaptability to identify novel objects and domains. Recently, several studies have attempted to query these models with complex language expressions that include specifications of fine-grained semantic details, such as attributes, shapes, textures, and relations. However, simply incorporating language descriptions as queries does not guarantee accurate interpretation by the models. In fact, our experiments show that GLIP, the state-of-the-art vision-language model for object detection, often disregards contextual information in the language descriptions and instead relies heavily on detecting objects solely by their names. To tackle the challenges, we propose a new description-conditioned (DesCo) paradigm of learning object recognition models with rich language descriptions consisting of two major innovations: 1) we employ a large language model as a commonsense knowledge engine to generate rich language descriptions of objects based on object names and the raw image-text caption; 2) we design context-sensitive queries to improve the model's ability in deciphering intricate nuances embedded within descriptions and enforce the model to focus on context rather than object names alone. On two novel object detection benchmarks, LVIS and OminiLabel, under the zero-shot detection setting, our approach achieves 34.8 APr minival (+9.1) and 29.3 AP (+3.6), respectively, surpassing the prior state-of-the-art models, GLIP and FIBER, by a large margin.
### A Gated Cross-domain Collaborative Network for Underwater Object Detection
- **Authors:** Linhui Dai, Hong Liu, Pinhao Song, Mengyuan Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.14141
- **Pdf link:** https://arxiv.org/pdf/2306.14141
- **Abstract**
Underwater object detection (UOD) plays a significant role in aquaculture and marine environmental protection. Considering the challenges posed by low contrast and low-light conditions in underwater environments, several underwater image enhancement (UIE) methods have been proposed to improve the quality of underwater images. However, only using the enhanced images does not improve the performance of UOD, since it may unavoidably remove or alter critical patterns and details of underwater objects. In contrast, we believe that exploring the complementary information from the two domains is beneficial for UOD. The raw image preserves the natural characteristics of the scene and texture information of the objects, while the enhanced image improves the visibility of underwater objects. Based on this perspective, we propose a Gated Cross-domain Collaborative Network (GCC-Net) to address the challenges of poor visibility and low contrast in underwater environments, which comprises three dedicated components. Firstly, a real-time UIE method is employed to generate enhanced images, which can improve the visibility of objects in low-contrast areas. Secondly, a cross-domain feature interaction module is introduced to facilitate the interaction and mine complementary information between raw and enhanced image features. Thirdly, to prevent the contamination of unreliable generated results, a gated feature fusion module is proposed to adaptively control the fusion ratio of cross-domain information. Our method presents a new UOD paradigm from the perspective of cross-domain information interaction and fusion. Experimental results demonstrate that the proposed GCC-Net achieves state-of-the-art performance on four underwater datasets.
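The gated feature fusion idea above can be sketched as a sigmoid gate computed from both domains that blends raw and enhanced features per position. The single linear projection below is a stand-in assumption for the paper's actual fusion module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_raw, f_enh, w, b):
    """Blend raw-domain and enhanced-domain features with a learned gate.

    g is computed from the concatenation of both feature maps; the
    output is the convex combination g * f_raw + (1 - g) * f_enh.
    The linear map (w, b) is an illustrative stand-in for the paper's
    gated feature fusion module.
    """
    g = sigmoid(np.concatenate([f_raw, f_enh], axis=-1) @ w + b)
    return g * f_raw + (1.0 - g) * f_enh

rng = np.random.default_rng(0)
f_raw = rng.normal(size=(5, 8))   # raw-image features (toy shapes)
f_enh = rng.normal(size=(5, 8))   # enhanced-image features
w = rng.normal(size=(16, 8))
b = np.zeros(8)
fused = gated_fusion(f_raw, f_enh, w, b)
```

Because the gate lies in (0, 1), every fused value stays between the two domain features, which is how unreliable enhancement results are prevented from contaminating the detector's input.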
### DiffSketcher: Text Guided Vector Sketch Synthesis through Latent Diffusion Models
- **Authors:** Ximing Xing, Chuang Wang, Haitao Zhou, Jing Zhang, Qian Yu, Dong Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2306.14685
- **Pdf link:** https://arxiv.org/pdf/2306.14685
- **Abstract**
Even though trained mainly on images, we discover that pretrained diffusion models show impressive power in guiding sketch synthesis. In this paper, we present DiffSketcher, an innovative algorithm that creates vectorized free-hand sketches using natural language input. DiffSketcher is developed based on a pre-trained text-to-image diffusion model. It performs the task by directly optimizing a set of Bezier curves with an extended version of the score distillation sampling (SDS) loss, which allows us to use a raster-level diffusion model as a prior for optimizing a parametric vectorized sketch generator. Furthermore, we explore attention maps embedded in the diffusion model for effective stroke initialization to speed up the generation process. The generated sketches demonstrate multiple levels of abstraction while maintaining recognizability, underlying structure, and essential visual details of the subject drawn. Our experiments show that DiffSketcher achieves greater quality than prior work.
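The strokes DiffSketcher optimizes are Bezier curves; the curve parameterization itself is standard and can be evaluated directly. This sketch shows only the cubic Bezier evaluation, not the SDS loss or the differentiable rasterizer.

```python
import numpy as np

def cubic_bezier(points, t):
    """Evaluate a cubic Bezier curve at parameter values t.

    points: four control points, shape (4, 2).
    Uses the Bernstein form: B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1
                                    + 3(1-t) t^2 p2 + t^3 p3.
    """
    p0, p1, p2, p3 = points
    t = np.asarray(t, dtype=float)[:, None]
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)

ctrl = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
pts = cubic_bezier(ctrl, np.linspace(0.0, 1.0, 5))
```

Because the sampled points are differentiable in the control points, a score-distillation-style loss on the rasterized strokes can push gradients back into the stroke parameters.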
## Keyword: raw image
### DesCo: Learning Object Recognition with Rich Language Descriptions
- **Authors:** Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2306.14060
- **Pdf link:** https://arxiv.org/pdf/2306.14060
- **Abstract**
Recent development in vision-language approaches has instigated a paradigm shift in learning visual recognition models from language supervision. These approaches align objects with language queries (e.g. "a photo of a cat") and improve the models' adaptability to identify novel objects and domains. Recently, several studies have attempted to query these models with complex language expressions that include specifications of fine-grained semantic details, such as attributes, shapes, textures, and relations. However, simply incorporating language descriptions as queries does not guarantee accurate interpretation by the models. In fact, our experiments show that GLIP, the state-of-the-art vision-language model for object detection, often disregards contextual information in the language descriptions and instead relies heavily on detecting objects solely by their names. To tackle the challenges, we propose a new description-conditioned (DesCo) paradigm of learning object recognition models with rich language descriptions consisting of two major innovations: 1) we employ a large language model as a commonsense knowledge engine to generate rich language descriptions of objects based on object names and the raw image-text caption; 2) we design context-sensitive queries to improve the model's ability in deciphering intricate nuances embedded within descriptions and enforce the model to focus on context rather than object names alone. On two novel object detection benchmarks, LVIS and OminiLabel, under the zero-shot detection setting, our approach achieves 34.8 APr minival (+9.1) and 29.3 AP (+3.6), respectively, surpassing the prior state-of-the-art models, GLIP and FIBER, by a large margin.
### A Gated Cross-domain Collaborative Network for Underwater Object Detection
- **Authors:** Linhui Dai, Hong Liu, Pinhao Song, Mengyuan Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2306.14141
- **Pdf link:** https://arxiv.org/pdf/2306.14141
- **Abstract**
Underwater object detection (UOD) plays a significant role in aquaculture and marine environmental protection. Considering the challenges posed by low contrast and low-light conditions in underwater environments, several underwater image enhancement (UIE) methods have been proposed to improve the quality of underwater images. However, only using the enhanced images does not improve the performance of UOD, since it may unavoidably remove or alter critical patterns and details of underwater objects. In contrast, we believe that exploring the complementary information from the two domains is beneficial for UOD. The raw image preserves the natural characteristics of the scene and texture information of the objects, while the enhanced image improves the visibility of underwater objects. Based on this perspective, we propose a Gated Cross-domain Collaborative Network (GCC-Net) to address the challenges of poor visibility and low contrast in underwater environments, which comprises three dedicated components. Firstly, a real-time UIE method is employed to generate enhanced images, which can improve the visibility of objects in low-contrast areas. Secondly, a cross-domain feature interaction module is introduced to facilitate the interaction and mine complementary information between raw and enhanced image features. Thirdly, to prevent the contamination of unreliable generated results, a gated feature fusion module is proposed to adaptively control the fusion ratio of cross-domain information. Our method presents a new UOD paradigm from the perspective of cross-domain information interaction and fusion. Experimental results demonstrate that the proposed GCC-Net achieves state-of-the-art performance on four underwater datasets.
authors yuxing lee wei wu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg arxiv link pdf link abstract due to the point cloud s irregular and unordered geometry structure conventional knowledge distillation technology lost a lot of information when directly used on point cloud tasks in this paper we propose feature adversarial distillation fad method a generic adversarial loss function in point cloud distillation to reduce loss during knowledge transfer in the feature extraction stage the features extracted by the teacher are used as the discriminator and the students continuously generate new features in the training stage the feature of the student is obtained by attacking the feedback from the teacher and getting a score to judge whether the student has learned the knowledge well or not in experiments on standard point cloud classification on and scanobjectnn datasets our method reduced the information loss of knowledge transfer in distillation in model compression while maintaining competitive performance keyword raw real world video for zoom enhancement based on spatio temporal coupling authors zhiling guo yinqiang zheng haoran zhang xiaodan shi zekun cai ryosuke shibasaki jinyue yan subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract in recent years single frame image super resolution sr has become more realistic by considering the zooming effect and using real world short and long focus image pairs in this paper we further investigate the feasibility of applying realistic multi frame clips to enhance zoom quality via spatio temporal information coupling specifically we first built a real world video benchmark videoraw by a synchronized co axis optical system the dataset contains paired short focus raw and long focus srgb videos of different dynamic scenes based on videoraw we then presented a spatio temporal coupling loss termed as 
stcl the proposed stcl is intended for better utilization of information from paired and adjacent frames to align and fuse features both temporally and spatially at the feature level the outperformed experimental results obtained in different zoom scenarios demonstrate the superiority of integrating real world video dataset and stcl into existing sr models for zoom quality enhancement and reveal that the proposed method can serve as an advanced and viable tool for video zoom desco learning object recognition with rich language descriptions authors liunian harold li zi yi dou nanyun peng kai wei chang subjects computer vision and pattern recognition cs cv computation and language cs cl machine learning cs lg arxiv link pdf link abstract recent development in vision language approaches has instigated a paradigm shift in learning visual recognition models from language supervision these approaches align objects with language queries e g a photo of a cat and improve the models adaptability to identify novel objects and domains recently several studies have attempted to query these models with complex language expressions that include specifications of fine grained semantic details such as attributes shapes textures and relations however simply incorporating language descriptions as queries does not guarantee accurate interpretation by the models in fact our experiments show that glip the state of the art vision language model for object detection often disregards contextual information in the language descriptions and instead relies heavily on detecting objects solely by their names to tackle the challenges we propose a new description conditioned desco paradigm of learning object recognition models with rich language descriptions consisting of two major innovations we employ a large language model as a commonsense knowledge engine to generate rich language descriptions of objects based on object names and the raw image text caption we design context sensitive queries 
to improve the model s ability in deciphering intricate nuances embedded within descriptions and enforce the model to focus on context rather than object names alone on two novel object detection benchmarks lvis and ominilabel under the zero shot detection setting our approach achieves apr minival and ap respectively surpassing the prior state of the art models glip and fiber by a large margin a gated cross domain collaborative network for underwater object detection authors linhui dai hong liu pinhao song mengyuan liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract underwater object detection uod plays a significant role in aquaculture and marine environmental protection considering the challenges posed by low contrast and low light conditions in underwater environments several underwater image enhancement uie methods have been proposed to improve the quality of underwater images however only using the enhanced images does not improve the performance of uod since it may unavoidably remove or alter critical patterns and details of underwater objects in contrast we believe that exploring the complementary information from the two domains is beneficial for uod the raw image preserves the natural characteristics of the scene and texture information of the objects while the enhanced image improves the visibility of underwater objects based on this perspective we propose a gated cross domain collaborative network gcc net to address the challenges of poor visibility and low contrast in underwater environments which comprises three dedicated components firstly a real time uie method is employed to generate enhanced images which can improve the visibility of objects in low contrast areas secondly a cross domain feature interaction module is introduced to facilitate the interaction and mine complementary information between raw and enhanced image features thirdly to prevent the contamination of unreliable generated results a gated feature 
fusion module is proposed to adaptively control the fusion ratio of cross domain information our method presents a new uod paradigm from the perspective of cross domain information interaction and fusion experimental results demonstrate that the proposed gcc net achieves state of the art performance on four underwater datasets diffsketcher text guided vector sketch synthesis through latent diffusion models authors ximing xing chuang wang haitao zhou jing zhang qian yu dong xu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract even though trained mainly on images we discover that pretrained diffusion models show impressive power in guiding sketch synthesis in this paper we present diffsketcher an innovative algorithm that creates vectorized free hand sketches using natural language input diffsketcher is developed based on a pre trained text to image diffusion model it performs the task by directly optimizing a set of bezier curves with an extended version of the score distillation sampling sds loss which allows us to use a raster level diffusion model as a prior for optimizing a parametric vectorized sketch generator furthermore we explore attention maps embedded in the diffusion model for effective stroke initialization to speed up the generation process the generated sketches demonstrate multiple levels of abstraction while maintaining recognizability underlying structure and essential visual details of the subject drawn our experiments show that diffsketcher achieves greater quality than prior work keyword raw image desco learning object recognition with rich language descriptions authors liunian harold li zi yi dou nanyun peng kai wei chang subjects computer vision and pattern recognition cs cv computation and language cs cl machine learning cs lg arxiv link pdf link abstract recent development in vision language approaches has instigated a paradigm shift in learning visual recognition models from 
language supervision these approaches align objects with language queries e g a photo of a cat and improve the models adaptability to identify novel objects and domains recently several studies have attempted to query these models with complex language expressions that include specifications of fine grained semantic details such as attributes shapes textures and relations however simply incorporating language descriptions as queries does not guarantee accurate interpretation by the models in fact our experiments show that glip the state of the art vision language model for object detection often disregards contextual information in the language descriptions and instead relies heavily on detecting objects solely by their names to tackle the challenges we propose a new description conditioned desco paradigm of learning object recognition models with rich language descriptions consisting of two major innovations we employ a large language model as a commonsense knowledge engine to generate rich language descriptions of objects based on object names and the raw image text caption we design context sensitive queries to improve the model s ability in deciphering intricate nuances embedded within descriptions and enforce the model to focus on context rather than object names alone on two novel object detection benchmarks lvis and ominilabel under the zero shot detection setting our approach achieves apr minival and ap respectively surpassing the prior state of the art models glip and fiber by a large margin a gated cross domain collaborative network for underwater object detection authors linhui dai hong liu pinhao song mengyuan liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract underwater object detection uod plays a significant role in aquaculture and marine environmental protection considering the challenges posed by low contrast and low light conditions in underwater environments several underwater image enhancement uie methods 
have been proposed to improve the quality of underwater images however only using the enhanced images does not improve the performance of uod since it may unavoidably remove or alter critical patterns and details of underwater objects in contrast we believe that exploring the complementary information from the two domains is beneficial for uod the raw image preserves the natural characteristics of the scene and texture information of the objects while the enhanced image improves the visibility of underwater objects based on this perspective we propose a gated cross domain collaborative network gcc net to address the challenges of poor visibility and low contrast in underwater environments which comprises three dedicated components firstly a real time uie method is employed to generate enhanced images which can improve the visibility of objects in low contrast areas secondly a cross domain feature interaction module is introduced to facilitate the interaction and mine complementary information between raw and enhanced image features thirdly to prevent the contamination of unreliable generated results a gated feature fusion module is proposed to adaptively control the fusion ratio of cross domain information our method presents a new uod paradigm from the perspective of cross domain information interaction and fusion experimental results demonstrate that the proposed gcc net achieves state of the art performance on four underwater datasets
| 1
|
1,595
| 4,204,750,975
|
IssuesEvent
|
2016-06-28 11:07:19
|
matz-e/lobster
|
https://api.github.com/repos/matz-e/lobster
|
closed
|
Add gsiftp protocol for stage-out
|
processing
|
Until we've scraped every last bit of bestman from the servers, we should provide `gsiftp://` as a more reliable alternative to `srm://`.
|
1.0
|
Add gsiftp protocol for stage-out - Until we've scraped every last bit of bestman from the servers, we should provide `gsiftp://` as a more reliable alternative to `srm://`.
|
process
|
add gsiftp protocol for stage out until we ve scraped every last bit of bestman from the servers we should provide gsiftp as a more reliable alternative to srm
| 1
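To make the gsiftp alternative concrete, here is a hedged sketch of mapping an `srm://` stage-out URL to a `gsiftp://` one. The SFN query-parameter layout, the example hostname, and the default GridFTP port 2811 are common conventions but assumptions here, not lobster's actual code:

```javascript
// Hypothetical helper: derive a gsiftp:// stage-out URL from an srm:// one.
// Real sites differ in port and path mapping -- treat this as a sketch only.
function toGsiftp(srmUrl, port = 2811) {
  const u = new URL(srmUrl);
  if (u.protocol !== "srm:") throw new Error(`not an srm:// URL: ${srmUrl}`);
  // Many SRM endpoints carry the physical file path in an SFN query
  // parameter; fall back to the plain pathname when it is absent.
  const path = u.searchParams.get("SFN") ?? u.pathname;
  return `gsiftp://${u.hostname}:${port}${path}`;
}

console.log(toGsiftp("srm://se.example.org:8443/srm/v2/server?SFN=/store/user/file.root"));
```

The resulting `gsiftp://` URL can then be handed to whichever GridFTP client the site provides.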
|
17,630
| 23,446,901,728
|
IssuesEvent
|
2022-08-15 20:39:03
|
googleapis/release-please
|
https://api.github.com/repos/googleapis/release-please
|
closed
|
Drop Node 12 support
|
type: process
|
Node 12 went EOL April 2022.
Our octokit dependencies took a major version bump to drop Node 12 support:
* https://github.com/googleapis/release-please/pull/1515
* https://github.com/googleapis/release-please/pull/1516
* https://github.com/googleapis/release-please/pull/1521
|
1.0
|
Drop Node 12 support - Node 12 went EOL April 2022.
Our octokit dependencies took a major version bump to drop Node 12 support:
* https://github.com/googleapis/release-please/pull/1515
* https://github.com/googleapis/release-please/pull/1516
* https://github.com/googleapis/release-please/pull/1521
|
process
|
drop node support node went eol april our octokit dependencies took a major version bump to drop node support
| 1
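Alongside an `engines` bump, packages often add a runtime guard such as the minimal sketch below (illustrative only; the minimum version and message are assumptions, not release-please's actual code):

```javascript
// Refuse to run on Node releases older than the new floor. Node 12 reached
// end-of-life in April 2022, so 14 is used here as the assumed minimum.
const MIN_MAJOR = 14;
const major = Number(process.versions.node.split(".")[0]);
if (major < MIN_MAJOR) {
  console.error(`Node ${process.versions.node} is unsupported; please upgrade to >= ${MIN_MAJOR}.`);
  process.exit(1);
}
console.log(`running on Node ${major}, minimum is ${MIN_MAJOR}`);
```

A guard like this fails fast with a clear message instead of crashing later on syntax or API gaps.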
|
17,128
| 3,594,474,897
|
IssuesEvent
|
2016-02-01 23:52:35
|
googlefonts/fontbakery
|
https://api.github.com/repos/googlefonts/fontbakery
|
closed
|
Extend checker to allow automated fixes
|
testing
|
Following #221
Tests can have associated automatic fixes, so if a test fails, it can be automatically fixed.
For example, there should be a test that an nbsp glyph is present, a space glyph is present, correct Unicode code points are assigned to each, and both have the same width.
If this test fails, the fix is to make a nbsp with the width of the space. Behdad wrote [a fonttools script to do exactly this](https://gist.github.com/davelab6/5c865bb658b05c7bc37c). The moment for this to be run during the build process is after ufo/ttx is compiled to ttf and before ttfautohinting and subsetting.
These tests and fixes should be run on the final TTFs produced by the build process only, since the failing tests will alert people to make the fixes upstream.
- [x] show process of fixer in build log
|
1.0
|
Extend checker to allow automated fixes - Following #221
Tests can have associated automatic fixes, so if a test fails, it can be automatically fixed.
For example, there should be a test that an nbsp glyph is present, a space glyph is present, correct Unicode code points are assigned to each, and both have the same width.
If this test fails, the fix is to make a nbsp with the width of the space. Behdad wrote [a fonttools script to do exactly this](https://gist.github.com/davelab6/5c865bb658b05c7bc37c). The moment for this to be run during the build process is after ufo/ttx is compiled to ttf and before ttfautohinting and subsetting.
These tests and fixes should be run on the final TTFs produced by the build process only, since the failing tests will alert people to make the fixes upstream.
- [x] show process of fixer in build log
|
non_process
|
extend checker to allow automated fixes following tests can have associated automatic fixes so if a test fails it can be automatically fixed for example there should be a test for a nbsp glyph present a space glyph present correct unicode points assigned to each and both with the same width if this test fails the fix is to make a nbsp with the width of the space behdad wrote the moment for this to be run during the build process is after ufo ttx is compiled to ttf and before ttfautohinting and subsetting these tests and fixes should be run on the final ttfs produced by the build process only since the failing tests will alert people to make the fixes upstream show process of fixer in build log
| 0
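The nbsp/space check-and-fix pattern described in the row above can be sketched with a toy glyph table (hypothetical structure; not fontbakery's or fonttools' actual API):

```javascript
// Check that both space (U+0020) and nbsp (U+00A0) exist with equal advance
// widths; if not, apply the fix: synthesize the nbsp or copy the space width.
function checkAndFixNbsp(glyphs) {
  const space = glyphs["space"];
  if (!space) return { ok: false, fixed: false, reason: "no space glyph" };
  if (!glyphs["uni00A0"]) {
    glyphs["uni00A0"] = { width: space.width }; // fix: add nbsp with space's width
    return { ok: false, fixed: true, reason: "nbsp was missing" };
  }
  if (glyphs["uni00A0"].width !== space.width) {
    glyphs["uni00A0"].width = space.width; // fix: equalize advance widths
    return { ok: false, fixed: true, reason: "nbsp width differed from space" };
  }
  return { ok: true, fixed: false, reason: "" };
}

const font = { space: { width: 600 }, uni00A0: { width: 500 } };
console.log(checkAndFixNbsp(font), font.uni00A0.width); // nbsp width is now 600
```

Run on the final TTFs, a failing check logs its reason while the in-place fix keeps the build usable.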
|
22,657
| 31,895,828,193
|
IssuesEvent
|
2023-09-18 01:32:01
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - lowestBiostratigraphicZone
|
Term - change Class - GeologicalContext normative Task Group - Material Sample Process - complete
|
## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_lowestBiostratigraphicZone
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): lowestBiostratigraphicZone
* Term label (English, not normative): Lowest Biostratigraphic Zone
* * Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the lowest possible geological biostratigraphic zone of the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Maastrichtian
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
1.0
|
Change term - lowestBiostratigraphicZone - ## Term change
* Submitter: [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/)
* Efficacy Justification (why is this change necessary?): Create consistency of terms for material in Darwin Core.
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): [Material Sample Task Group](https://www.tdwg.org/community/osr/material-sample/), which includes representatives of over 10 organizations.
* Stability Justification (what concerns are there that this might affect existing implementations?): None
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: No
Current Term definition: https://dwc.tdwg.org/list/#dwc_lowestBiostratigraphicZone
Proposed attributes of the new term version (Please put actual changes to be implemented in **bold** and ~strikethrough~):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): lowestBiostratigraphicZone
* Term label (English, not normative): Lowest Biostratigraphic Zone
* * Organized in Class (e.g., Occurrence, Event, Location, Taxon): Geological Context
* Definition of the term (normative): The full name of the lowest possible geological biostratigraphic zone of the stratigraphic horizon from which the ~~cataloged item~~**dwc:MaterialEntity** was collected.
* Usage comments (recommendations regarding content, etc., not normative):
* Examples (not normative): Maastrichtian
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
|
process
|
change term lowestbiostratigraphiczone term change submitter efficacy justification why is this change necessary create consistency of terms for material in darwin core demand justification if the change is semantic in nature name at least two organizations that independently need this term which includes representatives of over organizations stability justification what concerns are there that this might affect existing implementations none implications for dwciri namespace does this change affect a dwciri term version no current term definition proposed attributes of the new term version please put actual changes to be implemented in bold and strikethrough term name in lowercamelcase for properties uppercamelcase for classes lowestbiostratigraphiczone term label english not normative lowest biostratigraphic zone organized in class e g occurrence event location taxon geological context definition of the term normative the full name of the lowest possible geological biostratigraphic zone of the stratigraphic horizon from which the cataloged item dwc materialentity was collected usage comments recommendations regarding content etc not normative examples not normative maastrichtian refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative none abcd xpath of the equivalent term in abcd or efg not normative not in abcd
| 1
|
12,563
| 14,981,424,661
|
IssuesEvent
|
2021-01-28 14:49:49
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Running multiple findUnique's in parallel causes both to return null
|
bug/2-confirmed kind/bug process/candidate team/client tech/engines
|
<!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
When running two of the same findUnique queries in parallel, both return null where they should return the same document.
## How to reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
1. Create a new Prisma project with a PostgreSQL database.
2. Add a table with a @@unique attribute
```prisma
datasource db {
provider = "postgresql"
url = "URL"
}
generator client {
provider = "prisma-client-js"
}
model Property {
id String @id @default(cuid())
Availability Availability[]
}
model Availability {
date DateTime
property Property @relation(fields: [propertyId], references: [id])
propertyId String
status AvailabilityStatus @default(AVAILABLE)
@@unique([propertyId, date, status])
}
enum AvailabilityStatus {
AVAILABLE
}
```
3. Create a new set of documents in the table
```JavaScript
const property = await prisma.property.create({});
const targetDate = new Date('2020-12-03T00:00:00.000Z');
await prisma.availability.create({
data: {
property: { connect: {id: property.id }},
status: AvailabilityStatus.AVAILABLE,
date: targetDate,
},
});
```
4. Query the document twice in parallel
```JavaScript
const firstUnique = prisma.availability.findUnique({
where: {
propertyId_date_status: {
propertyId: property.id,
status: AvailabilityStatus.AVAILABLE,
date: targetDate,
}
},
});
const secondUnique = prisma.availability.findUnique({
where: {
propertyId_date_status: {
propertyId: property.id,
status: AvailabilityStatus.AVAILABLE,
date: targetDate,
}
},
});
const res = await Promise.all([firstUnique, secondUnique])
console.log(res); <-- [null, null]
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Expected to return the same document twice - a warning would also be super helpful if this happens.
## Prisma information
<!-- Your Prisma schema, Prisma Client queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Mac
- Database: PostgreSQL
- Node.js version: 12.18.0
- Prisma version: 2.12.1
<!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]-->
```
@prisma/cli : 2.12.1
@prisma/client : 2.12.1
Current platform : darwin
Query Engine : query-engine cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules/@prisma/engines/query-engine-darwin)
Migration Engine : migration-engine-cli cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules/@prisma/engines/migration-engine-darwin)
Introspection Engine : introspection-core cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules/@prisma/engines/introspection-engine-darwin)
Format Binary : prisma-fmt cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules/@prisma/engines/prisma-fmt-darwin)
Studio : 0.322.0
```
|
1.0
|
Running multiple findUnique's in parallel causes both to return null - <!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
<!-- A clear and concise description of what the bug is. -->
When running two of the same findUnique queries in parallel, both return null where they should return the same document.
## How to reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
1. Create a new Prisma project with a PostgreSQL database.
2. Add a table with a @@unique attribute
```prisma
datasource db {
provider = "postgresql"
url = "URL"
}
generator client {
provider = "prisma-client-js"
}
model Property {
id String @id @default(cuid())
Availability Availability[]
}
model Availability {
date DateTime
property Property @relation(fields: [propertyId], references: [id])
propertyId String
status AvailabilityStatus @default(AVAILABLE)
@@unique([propertyId, date, status])
}
enum AvailabilityStatus {
AVAILABLE
}
```
3. Create a new set of documents in the table
```JavaScript
const property = await prisma.property.create({});
const targetDate = new Date('2020-12-03T00:00:00.000Z');
await prisma.availability.create({
data: {
property: { connect: {id: property.id }},
status: AvailabilityStatus.AVAILABLE,
date: targetDate,
},
});
```
4. Query the document twice in parallel
```JavaScript
const firstUnique = prisma.availability.findUnique({
where: {
propertyId_date_status: {
propertyId: property.id,
status: AvailabilityStatus.AVAILABLE,
date: targetDate,
}
},
});
const secondUnique = prisma.availability.findUnique({
where: {
propertyId_date_status: {
propertyId: property.id,
status: AvailabilityStatus.AVAILABLE,
date: targetDate,
}
},
});
const res = await Promise.all([firstUnique, secondUnique])
console.log(res); <-- [null, null]
```
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Expected to return the same document twice - a warning would also be super helpful if this happens.
## Prisma information
<!-- Your Prisma schema, Prisma Client queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Mac
- Database: PostgreSQL
- Node.js version: 12.18.0
- Prisma version: 2.12.1
<!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]-->
```
@prisma/cli : 2.12.1
@prisma/client : 2.12.1
Current platform : darwin
Query Engine : query-engine cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules/@prisma/engines/query-engine-darwin)
Migration Engine : migration-engine-cli cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules/@prisma/engines/migration-engine-darwin)
Introspection Engine : introspection-core cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules/@prisma/engines/introspection-engine-darwin)
Format Binary : prisma-fmt cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules/@prisma/engines/prisma-fmt-darwin)
Studio : 0.322.0
```
|
process
|
running multiple findunique s in parallel causes both to return null thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description when running two of the same findunique queries in parallel both return null where they should return the same document how to reproduce steps to reproduce the behavior go to change run see error create a new prisma project with a postgresql database add a table with a unique attribute prisma datasource db provider postgresql url url generator client provider prisma client js model property id string id default cuid availability availability model availability date datetime property property relation fields references propertyid string status availabilitystatus default available unique enum availabilitystatus available create a new set of documents in the table javascript const property await prisma property create const targetdate new date await prisma availability create data property connect id property id status availabilitystatus available date targetdate query the document twice in parallel javascript const firstunique prisma availability findunique where propertyid date status propertyid property id status availabilitystatus available date targetdate const secondunique prisma availability findunique where propertyid date status propertyid property id status availabilitystatus available date targetdate const res await promise all console log res expected behavior expected to return the same document twice a warning would also be super helpful if this happens prisma information your prisma schema prisma client queries do not include your database credentials when sharing your prisma schema environment setup os mac database postgresql node js version prisma version prisma cli prisma client current platform darwin query engine query engine at node modules prisma engines query engine darwin migration engine migration engine cli at node modules prisma engines migration engine darwin introspection engine introspection core at node modules prisma engines introspection engine darwin format binary prisma fmt at node modules prisma engines prisma fmt darwin studio
| 1
|
1,875
| 4,699,731,026
|
IssuesEvent
|
2016-10-12 16:28:31
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Search on some X page with pagination
|
inprocess
|
Hi @AllenFang
I am trying to figure out an error. I have 800 data records and they are shown 20 per page and pagination size 10 with pagination feature. But when i change the page to suppose 10 and then do a search, i get an error that says my first **TableHeaderColumn** dataField is undefined. "Cannot read property X of undefined". Whereas it loads the table without any error on initial load and on page change. So its just the search on other pages thats broken. Any idea whats going on with this ?
|
1.0
|
Search on some X page with pagination - Hi @AllenFang
I am trying to figure out an error. I have 800 data records and they are shown 20 per page and pagination size 10 with pagination feature. But when i change the page to suppose 10 and then do a search, i get an error that says my first **TableHeaderColumn** dataField is undefined. "Cannot read property X of undefined". Whereas it loads the table without any error on initial load and on page change. So its just the search on other pages thats broken. Any idea whats going on with this ?
|
process
|
search on some x page with pagination hi allenfang i am trying to figure out an error i have data records and they are shown per page and pagination size with pagination feature but when i change the page to suppose and then do a search i get an error that says my first tableheadercolumn datafield is undefined cannot read property x of undefined whereas it loads the table without any error on initial load and on page change so its just the search on other pages thats broken any idea whats going on with this
| 1
|
425,097
| 29,191,974,447
|
IssuesEvent
|
2023-05-19 21:01:39
|
linkml/linkml.github.io
|
https://api.github.com/repos/linkml/linkml.github.io
|
closed
|
New template suggestion
|
documentation good first issue
|
see: https://github.com/linkml/linkml.github.io/blob/master/index.md?plain=1#L92

The main documentation page points to the archived repo https://github.com/linkml/archived-linkml-model-template . Is the new recommended repository https://github.com/linkml/linkml-project-cookiecutter ?
|
1.0
|
New template suggestion - see: https://github.com/linkml/linkml.github.io/blob/master/index.md?plain=1#L92

The main documentation page points to the archived repo https://github.com/linkml/archived-linkml-model-template . Is the new recommended repository https://github.com/linkml/linkml-project-cookiecutter ?
|
non_process
|
new template suggestion see the main documentation page points to the archived repo is the new recommended repository
| 0
|
1,283
| 3,815,142,868
|
IssuesEvent
|
2016-03-28 16:40:18
|
SIMEXP/niak
|
https://api.github.com/repos/SIMEXP/niak
|
closed
|
remove scores template grabbing from niak_grab_fmri_preprocess
|
preprocessing
|
get rid of the template selection for scores output on the grabber
|
1.0
|
remove scores template grabbing from niak_grab_fmri_preprocess - get rid of the template selection for scores output on the grabber
|
process
|
remove scores template grabbing from niak grab fmri preprocess get rid of the template selection for scores output on the grabber
| 1
|
19,088
| 25,135,899,155
|
IssuesEvent
|
2022-11-09 18:33:41
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
NTR: CCA tRNA nucleotidyltransferase
|
RNA processes enzymes in progress dictyBase
|
Hello:
Two enzymes from Dictyostelium described in PMID: 32717856 are characterized as "CCA-adding enzyme". Currently in the GO, this activity is associated with GO:0004810 tRNA adenylyltransferase activity (EC 2.7.7.25). However, a comment in this entry says "This term represents a deleted EC activity and is scheduled for obsoletion".
A new enzymatic activity (EC 2.7.7.72), combining former EC 2.7.7.21 and EC 2.7.7.25 has been described by ExPASy described at:
https://enzyme.expasy.org/EC/2.7.7.72
Could you please create a new GO term according to the guidelines suggested by ExPASy?
Name: CCA tRNA nucleotidyltransferase
Ontology: function
Synonyms:
Definition: A tRNA precursor + 2 CTP + ATP <=> a tRNA with a 3' CCA end + 3 diphosphate
Reference: PMID: 32717856 Unusual Occurrence of Two Bona-Fide CCA-Adding Enzymes in Dictyostelium discoideum
|
1.0
|
NTR: CCA tRNA nucleotidyltransferase - Hello:
Two enzymes from Dictyostelium described in PMID: 32717856 are characterized as "CCA-adding enzyme". Currently in the GO, this activity is associated with GO:0004810 tRNA adenylyltransferase activity (EC 2.7.7.25). However, a comment in this entry says "This term represents a deleted EC activity and is scheduled for obsoletion".
A new enzymatic activity (EC 2.7.7.72), combining former EC 2.7.7.21 and EC 2.7.7.25 has been described by ExPASy described at:
https://enzyme.expasy.org/EC/2.7.7.72
Could you please create a new GO term according to the guidelines suggested by ExPASy?
Name: CCA tRNA nucleotidyltransferase
Ontology: function
Synonyms:
Definition: A tRNA precursor + 2 CTP + ATP <=> a tRNA with a 3' CCA end + 3 diphosphate
Reference: PMID: 32717856 Unusual Occurrence of Two Bona-Fide CCA-Adding Enzymes in Dictyostelium discoideum
|
process
|
ntr cca trna nucleotidyltransferase hello two enzymes from dictyostelium described in pmid are characterized as cca adding enzyme currently in the go this activity is associated with go trna adenylyltransferase activity ec however a comment in this entry says this term represents a deleted ec activity and is scheduled for obsoletion a new enzymatic activity ec combining former ec and ec has been described by expasy described at could you please create a new go term according to the guidelines suggested by expasy name cca trna nucleotidyltransferase ontology function synonyms definition a trna precursor ctp atp a trna with a cca end diphosphate reference pmid unusual occurrence of two bona fide cca adding enzymes in dictyostelium discoideum
| 1
|