Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
16,282 | 20,906,567,346 | IssuesEvent | 2022-03-24 03:19:04 | quark-engine/quark-engine | https://api.github.com/repos/quark-engine/quark-engine | closed | Update the analysis library for Rizin v0.3.0 and above. | work-in-progress issue-processing-state-06 | **Is your feature request related to a problem? Please describe.**
An API change in Rizin [v0.3.0](https://github.com/rizinorg/rizin/releases/tag/v0.3.0) has caused the Rizin-based analysis to fail. Currently, Quark is only compatible with Rizin v0.2.0 or v0.2.1. However, such a limitation has led to a bad user experience (#305 ).
**Describe the solution you'd like**
Update the Rizin-based library to make Quark analysis work on Rizin v0.3.0 and above. | 1.0 | Update the analysis library for Rizin v0.3.0 and above. - **Is your feature request related to a problem? Please describe.**
An API change in Rizin [v0.3.0](https://github.com/rizinorg/rizin/releases/tag/v0.3.0) has caused the Rizin-based analysis to fail. Currently, Quark is only compatible with Rizin v0.2.0 or v0.2.1. However, such a limitation has led to a bad user experience (#305 ).
**Describe the solution you'd like**
Update the Rizin-based library to make Quark analysis work on Rizin v0.3.0 and above. | process | update the analysis library for rizin and above is your feature request related to a problem please describe an api change in rizin has caused the rizin based analysis to fail currently quark is only compatible with rizin or however such a limitation has led to a bad user experience describe the solution you d like update the rizin based library to make quark analysis work on rizin and above | 1 |
21,792 | 30,298,851,816 | IssuesEvent | 2023-07-10 03:05:19 | winter-telescope/mirar | https://api.github.com/repos/winter-telescope/mirar | opened | [BUG] Default ZOGY ZP key does not make sense | bug processors winterready | **Describe the bug**
ZOGY requires a key to tell it what zeropoint to use:
https://github.com/winter-telescope/mirar/blob/93ecacddd58f7814a30d045df4e53325aa7da570/mirar/processors/zogy/zogy.py#L122
It is optional, with a default of `'ZP'`. That does not make sense, because the photcal processor cannot output values with that key. You would need to add an entirely separate processor to modify it in the header first. So, as things stand, I think that running image subtraction with default settings will just fail.
Possible solutions I can think of:
- Make the default `'ZP_AUTO'`
- have no default value, forcing people to choose this when initialising the processor.
- have PhotCal put `'ZP'` into the header somehow | 1.0 | [BUG] Default ZOGY ZP key does not make sense - **Describe the bug**
ZOGY requires a key to tell it what zeropoint to use:
https://github.com/winter-telescope/mirar/blob/93ecacddd58f7814a30d045df4e53325aa7da570/mirar/processors/zogy/zogy.py#L122
It is optional, with a default of `'ZP'`. That does not make sense, because the photcal processor cannot output values with that key. You would need to add an entirely separate processor to modify it in the header first. So, as things stand, I think that running image subtraction with default settings will just fail.
Possible solutions I can think of:
- Make the default `'ZP_AUTO'`
- have no default value, forcing people to choose this when initialising the processor.
- have PhotCal put `'ZP'` into the header somehow | process | default zogy zp key does not make sense describe the bug zogy requires a key to tell it what zeropoint to use it is optional with a default of zp that does not make sense because the photcal processor cannot output values with that key you would need to add an entirely separate processor to modify it in the header first so as things stand i think that running image subtraction with default settings will just fail possible solutions i can think of make the default zp auto have no default value forcing people to choose this when initialising the processor have photcal put zp into the header somehow | 1 |
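The pitfall this issue describes — an optional argument whose default key is never written by the upstream calibration step — can be sketched in a few lines. This is a hedged illustration only: `run_subtraction` and its signature are hypothetical and do not mirror mirar's actual ZOGY processor; only the `'ZP'` and `'ZP_AUTO'` key names come from the issue.

```python
# Hypothetical sketch of the default-zeropoint-key pitfall: the processor
# defaults to looking up "ZP", but the calibration step only ever writes
# "ZP_AUTO", so running with default settings fails at runtime.

def run_subtraction(header: dict, zp_key: str = "ZP") -> float:
    """Return the zeropoint the subtraction would use (illustrative only)."""
    if zp_key not in header:
        raise KeyError(f"zeropoint key {zp_key!r} not found in header")
    return header[zp_key]


header = {"ZP_AUTO": 25.3}  # what a photometric-calibration step might write

try:
    run_subtraction(header)  # default key "ZP" -> KeyError
except KeyError as err:
    print("default fails:", err)

print("explicit key works:", run_subtraction(header, zp_key="ZP_AUTO"))
```

Any of the three proposed fixes removes the mismatch: changing the default to `'ZP_AUTO'`, making the argument required, or having the calibration step also write `'ZP'`.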
216,298 | 16,655,638,865 | IssuesEvent | 2021-06-05 13:24:43 | MurdoMaclachlan/oscr | https://api.github.com/repos/MurdoMaclachlan/oscr | closed | Move documentation to readthedocs | documentation | I've done most of this but forgot to document it-
IRONIC. | 1.0 | Move documentation to readthedocs - I've done most of this but forgot to document it-
IRONIC. | non_process | move documentation to readthedocs i ve done most of this but forgot to document it ironic | 0 |
14,791 | 18,065,740,961 | IssuesEvent | 2021-09-20 18:55:37 | esmero/strawberryfield | https://api.github.com/repos/esmero/strawberryfield | closed | Pass SBF Metadata to Final File Path (persistence) alter hook | enhancement JSON Postprocessors Digital Preservation Symfony Services | # What?
Refactor. Refactor. Maybe some altering entity needs to use the "title" of an ADO to define its final saving structure. So we may pass the data to the hook too, in case that is something people desire. (Slack says someone desires this)
I won't pass the actual `Field` object (the strawberry field) because it could end up being altered by the altering hook, and that, my friends, is a bad idea? (or is that desired?) Also won't clone the Object because that is a lot of memory wasted (then again, passing an array is also a lot of memory?) | 1.0 | Pass SBF Metadata to Final File Path (persistence) alter hook - # What?
Refactor. Refactor. Maybe some altering entity needs to use the "title" of an ADO to define its final saving structure. So we may pass the data to the hook too, in case that is something people desire. (Slack says someone desires this)
I won't pass the actual `Field` object (the strawberry field) because it could end up being altered by the altering hook, and that, my friends, is a bad idea? (or is that desired?) Also won't clone the Object because that is a lot of memory wasted (then again, passing an array is also a lot of memory?) | process | pass sbf metadata to final file path persistence alter hook what refactor refactor maybe some altering entity needs to use the title of an ado to define its final saving structure so we may pass the data to the hook too in case that is something people desire slack says someone desires this i won t pass the actual field object the strawberry field because it could end up being altered by the altering hook and that my friends is a bad idea or is that desired also won t clone the object because that is a lot of memory wasted then again passing an array is also a lot of memory | 1 |
18,389 | 24,522,291,607 | IssuesEvent | 2022-10-11 10:25:53 | vectordotdev/vector | https://api.github.com/repos/vectordotdev/vector | closed | Update the `reduce` transform to take `break.*` options | type: enhancement have: should transform: reduce domain: processing | To safeguard against user error we should add a new `break` options to the `reduce` transform:
- [ ] `break.time_limit_ms` (default: `30000`) is a rename of `expire_after_ms`.
- [ ] `break.event_limit` (default: `1000`) is a new option that will abort if this quantity is exceeded.
- [ ] `break.action` (options: `abort`, `flush`; default: `abort`) is a new option that controls what the transform does when either limit is hit.
- If this is set to `abort` then the accumulated event will be aborted and flushed as individual events.
- If this is set to `flush` then the accumulated event will be flushed as one event.
Open to better naming. | 1.0 | Update the `reduce` transform to take `break.*` options - To safeguard against user error we should add a new `break` options to the `reduce` transform:
- [ ] `break.time_limit_ms` (default: `30000`) is a rename of `expire_after_ms`.
- [ ] `break.event_limit` (default: `1000`) is a new option that will abort if this quantity is exceeded.
- [ ] `break.action` (options: `abort`, `flush`; default: `abort`) is a new option that controls what the transform does when either limit is hit.
- If this is set to `abort` then the accumulated event will be aborted and flushed as individual events.
- If this is set to `flush` then the accumulated event will be flushed as one event.
Open to better naming. | process | update the reduce transform to take break options to safeguard against user error we should add a new break options to the reduce transform break time limit ms default is a rename of expire after ms break event limit default is a new option that will abort if this quantity is exceeded break action default abort flush is a new option that controls what the transform does when either limit is hit if this is set to abort then the accumulated event will be aborted and flushed as individual events if this is set to flush then the accumulated event will be flushed as one event open to better naming | 1 |
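The semantics proposed above can be sketched roughly as follows. Vector itself is written in Rust and configured via TOML, so this Python sketch is only a hypothetical model of the `break.*` behaviour (names like `Reducer` and `push` are illustrative, not Vector's API): events accumulate until the time or count limit breaks, then are emitted either individually (`abort`) or as one merged event (`flush`).

```python
# Hypothetical model of the proposed `break.*` semantics, not Vector's
# implementation: accumulate events until a time or count limit is hit,
# then either abort (emit the parts individually) or flush (emit one
# merged event).
import time


class Reducer:
    def __init__(self, time_limit_ms=30000, event_limit=1000, action="abort"):
        assert action in ("abort", "flush")
        self.time_limit_ms = time_limit_ms
        self.event_limit = event_limit
        self.action = action
        self.buffer = []
        self.started = None

    def push(self, event, now_ms=None):
        """Add an event; return the list of emitted events if a limit broke."""
        now_ms = time.monotonic() * 1000 if now_ms is None else now_ms
        if self.started is None:
            self.started = now_ms
        self.buffer.append(event)
        expired = now_ms - self.started >= self.time_limit_ms
        full = len(self.buffer) >= self.event_limit
        if not (expired or full):
            return []
        if self.action == "abort":
            out = self.buffer[:]            # emit accumulated events individually
        else:
            out = [{"merged": self.buffer[:]}]  # emit one merged event
        self.buffer.clear()
        self.started = None
        return out
```

Either way the buffer is bounded, which is the point of the proposal: user error can no longer grow an accumulated event without limit.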
693,497 | 23,778,252,863 | IssuesEvent | 2022-09-01 23:50:40 | Xmetalfanx/linuxSetup | https://api.github.com/repos/Xmetalfanx/linuxSetup | closed | Atom code slimming needs testing | Priority | this is just a reminder to myself to double-check my new tweaks work
https://github.com/Xmetalfanx/linuxSetup/commit/74e3c15427a24729a2760fe2710efcb9c00b5fc7 is the commit the changes occurred in | 1.0 | Atom code slimming needs testing - this is just a reminder to myself to double-check my new tweaks work
https://github.com/Xmetalfanx/linuxSetup/commit/74e3c15427a24729a2760fe2710efcb9c00b5fc7 is the commit the changes occurred in | non_process | atom code slimming needs testing this is just a reminder to myself to double check my new tweaks work is the commit the changes occurred in | 0 |
9,762 | 12,744,076,381 | IssuesEvent | 2020-06-26 11:46:04 | prisma/prisma-engines | https://api.github.com/repos/prisma/prisma-engines | opened | Comment out Models that do not have a strict unique criteria | process/candidate | I tightened the validation of models and their unique criteria. A unique criterion that spans optional fields is not enough anymore for a model to be valid. We should adapt the introspection accordingly to comment out those models.
[Diff from introspection CI](https://github.com/prisma/introspection-engine-output/commit/6168e39a0271b935e66740b6d79d991832994b51) | 1.0 | Comment out Models that do not have a strict unique criteria - I tightened the validation of models and their unique criteria. A unique criterion that spans optional fields is not enough anymore for a model to be valid. We should adapt the introspection accordingly to comment out those models.
[Diff from introspection CI](https://github.com/prisma/introspection-engine-output/commit/6168e39a0271b935e66740b6d79d991832994b51) | process | comment out models that do not have a strict unique criteria i tightened the validation of models and their unique criteria a unique criterion that spans optional fields is not enough anymore for a model to be valid we should adapt the introspection accordingly to comment out those models | 1 |
361,109 | 10,704,402,292 | IssuesEvent | 2019-10-24 11:39:37 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | Update Header Navigation | Priority: Medium has pr | The structuring of the navigation needs to be updated:
- [x] Turn off `Reporting > Historic FI` (nest it in a new `env` variable)
- [x] Add link `Reporting > IRS`
- [x] Rename `Planning > Manage Plans` to `All Plans`
- [x] Rename `Planning > IRS` to `IRS Plans`
- [x] Add link `Planning > Add New Plan`
cc @rowo @moshthepitt | 1.0 | Update Header Navigation - The structuring of the navigation needs to be updated:
- [x] Turn off `Reporting > Historic FI` (nest it in a new `env` variable)
- [x] Add link `Reporting > IRS`
- [x] Rename `Planning > Manage Plans` to `All Plans`
- [x] Rename `Planning > IRS` to `IRS Plans`
- [x] Add link `Planning > Add New Plan`
cc @rowo @moshthepitt | non_process | update header navigation the structuring of the navigation needs to be updated turn off reporting historic fi nest it in a new env variable add link reporting irs rename planning manage plans to all plans rename planning irs to irs plans add link planning add new plan cc rowo moshthepitt | 0 |
5,391 | 8,214,014,722 | IssuesEvent | 2018-09-04 21:28:21 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Process.Kill ArgumentNullException | area-System.Diagnostics.Process | Receiving exception during Process.Kill as below.
at System.StubHelpers.StubHelpers.SafeHandleAddRef(SafeHandle pHandle, Boolean& success)
at Microsoft.Win32.NativeMethods.TerminateProcess(SafeProcessHandle processHandle, Int32 exitCode)
at System.Diagnostics.Process.Kill()
ReferenceSource shows code differently to CoreFX code on GitHub.
CoreFX:
using (SafeProcessHandle handle = GetProcessHandle(Interop.Advapi32.ProcessOptions.PROCESS_TERMINATE))
{
if (!Interop.Kernel32.TerminateProcess(handle, -1))
ReferenceSource:
handle = GetProcessHandle(NativeMethods.PROCESS_TERMINATE);
if (!NativeMethods.TerminateProcess(handle, -1))
Both seem to suffer from no null check on handle | 1.0 | Process.Kill ArgumentNullException - Receiving exception during Process.Kill as below.
at System.StubHelpers.StubHelpers.SafeHandleAddRef(SafeHandle pHandle, Boolean& success)
at Microsoft.Win32.NativeMethods.TerminateProcess(SafeProcessHandle processHandle, Int32 exitCode)
at System.Diagnostics.Process.Kill()
ReferenceSource shows code differently to CoreFX code on GitHub.
CoreFX:
using (SafeProcessHandle handle = GetProcessHandle(Interop.Advapi32.ProcessOptions.PROCESS_TERMINATE))
{
if (!Interop.Kernel32.TerminateProcess(handle, -1))
ReferenceSource:
handle = GetProcessHandle(NativeMethods.PROCESS_TERMINATE);
if (!NativeMethods.TerminateProcess(handle, -1))
Both seem to suffer from no null check on handle | process | process kill argumentnullexception receiving exception during process kill as below at system stubhelpers stubhelpers safehandleaddref safehandle phandle boolean success at microsoft nativemethods terminateprocess safeprocesshandle processhandle exitcode at system diagnostics process kill referencesource shows code differently to corefx code on github using safeprocesshandle handle getprocesshandle interop processoptions process terminate if interop terminateprocess handle handle getprocesshandle nativemethods process terminate if nativemethods terminateprocess handle both seem to suffer from no null check on handle | 1 |
20,223 | 26,814,086,444 | IssuesEvent | 2023-02-02 02:00:09 | lizhihao6/get-daily-arxiv-noti | https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti | opened | New submissions for Thu, 2 Feb 23 | event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB | ## Keyword: events
### Transforming CLIP to an Open-vocabulary Video Model via Interpolated Weight Optimization
- **Authors:** Zejia Weng, Xitong Yang, Ang Li, Zuxuan Wu, Yu-Gang Jiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.00624
- **Pdf link:** https://arxiv.org/pdf/2302.00624
- **Abstract**
Contrastive Language-Image Pretraining (CLIP) has demonstrated impressive zero-shot learning abilities for image understanding, yet limited effort has been made to investigate CLIP for zero-shot video recognition. We introduce Open-VCLIP, a simple yet effective approach that transforms CLIP into strong zero-shot video classifiers that can recognize unseen actions and events at test time. Our framework extends CLIP with minimal modifications to model spatial-temporal relationships in videos, making it a specialized video classifier, while striving for generalization. We formally show that training an Open-VCLIP is equivalent to continual learning with zero historical data. To address this problem, we propose Interpolated Weight Optimization, which utilizes the benefit of weight interpolation in both training and test time. We evaluate our method on three popular and challenging action recognition datasets following various zero-shot evaluation protocols and we demonstrate our approach outperforms state-of-the-art methods by clear margins. In particular, we achieve 87.9%, 58.3%, 81.1% zero-shot accuracy on UCF, HMDB and Kinetics-600 respectively, outperforming state-of-the-art methods by 8.3%, 7.8% and 12.2%.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Multispectral Pedestrian Detection via Reference Box Constrained Cross Attention and Modality Balanced Optimization
- **Authors:** Yinghui Xing, Song Wang, Guoqiang Liang, Qingyi Li, Xiuwei Zhang, Shizhou Zhang, Yanning Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.00290
- **Pdf link:** https://arxiv.org/pdf/2302.00290
- **Abstract**
Multispectral pedestrian detection is an important task for many around-the-clock applications, since the visible and thermal modalities can provide complementary information especially under low light conditions. To reduce the influence of hand-designed components in available multispectral pedestrian detectors, we propose a MultiSpectral pedestrian DEtection TRansformer (MS-DETR), which extends deformable DETR to multi-modal paradigm. In order to facilitate the multi-modal learning process, a Reference box Constrained Cross-Attention (RCCA) module is firstly introduced to the multi-modal Transformer decoder, which takes fusion branch together with the reference boxes as intermediaries to enable the interaction of visible and thermal modalities. To further balance the contribution of different modalities, we design a modality-balanced optimization strategy, which aligns the slots of decoders by adaptively adjusting the instance-level weight of three branches. Our end-to-end MS-DETR shows superior performance on the challenging KAIST and CVC-14 benchmark datasets.
### Do I Have Your Attention: A Large Scale Engagement Prediction Dataset and Baselines
- **Authors:** Monisha Singh, Ximi Hoque, Donghuo Zeng, Yanan Wang, Kazushi Ikeda, Abhinav Dhall
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC)
- **Arxiv link:** https://arxiv.org/abs/2302.00431
- **Pdf link:** https://arxiv.org/pdf/2302.00431
- **Abstract**
The degree of concentration, enthusiasm, optimism, and passion displayed by individual(s) while interacting with a machine is referred to as `user engagement'. Engagement comprises behavioural, cognitive, and affect-related cues. To create engagement prediction systems that can work in real-world conditions, it is quintessential to learn from rich, diverse datasets. To this end, a large scale multi-faceted engagement in the wild dataset is proposed. 31 hours duration data of 127 participants representing different illumination conditions is recorded. Thorough experiments are performed exploring the applicability of different features (action units, eye gaze, and head pose) and transformers. To further validate the rich nature of the dataset, evaluation is also performed on the EngageWild dataset. The experiments show the usefulness of the proposed dataset. The code, models and dataset will be made publicly available.
### Inching Towards Automated Understanding of the Meaning of Art: An Application to Computational Analysis of Mondrian's Artwork
- **Authors:** Alex Doboli, Mahan Agha Zahedi, Niloofar Gholamrezaei
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2302.00594
- **Pdf link:** https://arxiv.org/pdf/2302.00594
- **Abstract**
Deep Neural Networks (DNNs) have been successfully used in classifying digital images but have been less successful in classifying images with meanings that are not linear combinations of their visualized features, like images of artwork. Moreover, it is unknown what additional features must be included into DNNs, so that they can possibly classify using features beyond visually displayed features, like color, size, and form. Non-displayed features are important in abstract representations, reasoning, and understanding ambiguous expressions, which are arguably topics less studied by current AI methods. This paper attempts to identify capabilities that are related to semantic processing, a current limitation of DNNs. The proposed methodology identifies the missing capabilities by comparing the process of understanding Mondrian's paintings with the process of understanding electronic circuit designs, another creative problem solving instance. The compared entities are cognitive architectures that attempt to loosely mimic cognitive activities. The paper offers a detailed presentation of the characteristics of the architectural components, like goals, concepts, ideas, rules, procedures, beliefs, expectations, and outcomes. To explain the usefulness of the methodology, the paper discusses a new, three-step computational method to distinguish Mondrian's paintings from other artwork. The method includes in a backward order the cognitive architecture's components that operate only with the characteristics of the available data.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Stroke-based Rendering: From Heuristics to Deep Learning
- **Authors:** Florian Nolte, Andrew Melnik, Helge Ritter
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2302.00595
- **Pdf link:** https://arxiv.org/pdf/2302.00595
- **Abstract**
In the last few years, artistic image-making with deep learning models has gained a considerable amount of traction. A large number of these models operate directly in the pixel space and generate raster images. This is however not how most humans would produce artworks, for example, by planning a sequence of shapes and strokes to draw. Recent developments in deep learning methods help to bridge the gap between stroke-based paintings and pixel photo generation. With this survey, we aim to provide a structured introduction and understanding of common challenges and approaches in stroke-based rendering algorithms. These algorithms range from simple rule-based heuristics to stroke optimization and deep reinforcement agents, trained to paint images with differentiable vector graphics and neural rendering.
### ADAPT: Action-aware Driving Caption Transformer
- **Authors:** Bu Jin, Xinyu Liu, Yupeng Zheng, Pengfei Li, Hao Zhao, Tong Zhang, Yuhang Zheng, Guyue Zhou, Jingjing Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC)
- **Arxiv link:** https://arxiv.org/abs/2302.00673
- **Pdf link:** https://arxiv.org/pdf/2302.00673
- **Abstract**
End-to-end autonomous driving has great potential in the transportation industry. However, the lack of transparency and interpretability of the automatic decision-making process hinders its industrial adoption in practice. There have been some early attempts to use attention maps or cost volume for better model explainability which is difficult for ordinary passengers to understand. To bridge the gap, we propose an end-to-end transformer-based architecture, ADAPT (Action-aware Driving cAPtion Transformer), which provides user-friendly natural language narrations and reasoning for each decision making step of autonomous vehicular control and action. ADAPT jointly trains both the driving caption task and the vehicular control prediction task, through a shared video representation. Experiments on BDD-X (Berkeley DeepDrive eXplanation) dataset demonstrate state-of-the-art performance of the ADAPT framework on both automatic metrics and human evaluation. To illustrate the feasibility of the proposed framework in real-world applications, we build a novel deployable system that takes raw car videos as input and outputs the action narrations and reasoning in real time. The code, models and data are available at https://github.com/jxbbb/ADAPT.
## Keyword: raw image
There is no result
| 2.0 | New submissions for Thu, 2 Feb 23 - ## Keyword: events
### Transforming CLIP to an Open-vocabulary Video Model via Interpolated Weight Optimization
- **Authors:** Zejia Weng, Xitong Yang, Ang Li, Zuxuan Wu, Yu-Gang Jiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.00624
- **Pdf link:** https://arxiv.org/pdf/2302.00624
- **Abstract**
Contrastive Language-Image Pretraining (CLIP) has demonstrated impressive zero-shot learning abilities for image understanding, yet limited effort has been made to investigate CLIP for zero-shot video recognition. We introduce Open-VCLIP, a simple yet effective approach that transforms CLIP into strong zero-shot video classifiers that can recognize unseen actions and events at test time. Our framework extends CLIP with minimal modifications to model spatial-temporal relationships in videos, making it a specialized video classifier, while striving for generalization. We formally show that training an Open-VCLIP is equivalent to continual learning with zero historical data. To address this problem, we propose Interpolated Weight Optimization, which utilizes the benefit of weight interpolation in both training and test time. We evaluate our method on three popular and challenging action recognition datasets following various zero-shot evaluation protocols and we demonstrate our approach outperforms state-of-the-art methods by clear margins. In particular, we achieve 87.9%, 58.3%, 81.1% zero-shot accuracy on UCF, HMDB and Kinetics-600 respectively, outperforming state-of-the-art methods by 8.3%, 7.8% and 12.2%.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Multispectral Pedestrian Detection via Reference Box Constrained Cross Attention and Modality Balanced Optimization
- **Authors:** Yinghui Xing, Song Wang, Guoqiang Liang, Qingyi Li, Xiuwei Zhang, Shizhou Zhang, Yanning Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2302.00290
- **Pdf link:** https://arxiv.org/pdf/2302.00290
- **Abstract**
Multispectral pedestrian detection is an important task for many around-the-clock applications, since the visible and thermal modalities can provide complementary information especially under low light conditions. To reduce the influence of hand-designed components in available multispectral pedestrian detectors, we propose a MultiSpectral pedestrian DEtection TRansformer (MS-DETR), which extends deformable DETR to multi-modal paradigm. In order to facilitate the multi-modal learning process, a Reference box Constrained Cross-Attention (RCCA) module is firstly introduced to the multi-modal Transformer decoder, which takes fusion branch together with the reference boxes as intermediaries to enable the interaction of visible and thermal modalities. To further balance the contribution of different modalities, we design a modality-balanced optimization strategy, which aligns the slots of decoders by adaptively adjusting the instance-level weight of three branches. Our end-to-end MS-DETR shows superior performance on the challenging KAIST and CVC-14 benchmark datasets.
### Do I Have Your Attention: A Large Scale Engagement Prediction Dataset and Baselines
- **Authors:** Monisha Singh, Ximi Hoque, Donghuo Zeng, Yanan Wang, Kazushi Ikeda, Abhinav Dhall
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC)
- **Arxiv link:** https://arxiv.org/abs/2302.00431
- **Pdf link:** https://arxiv.org/pdf/2302.00431
- **Abstract**
The degree of concentration, enthusiasm, optimism, and passion displayed by individual(s) while interacting with a machine is referred to as `user engagement'. Engagement comprises behavioural, cognitive, and affect-related cues. To create engagement prediction systems that can work in real-world conditions, it is quintessential to learn from rich, diverse datasets. To this end, a large scale multi-faceted engagement in the wild dataset is proposed. 31 hours duration data of 127 participants representing different illumination conditions is recorded. Thorough experiments are performed exploring the applicability of different features (action units, eye gaze, and head pose) and transformers. To further validate the rich nature of the dataset, evaluation is also performed on the EngageWild dataset. The experiments show the usefulness of the proposed dataset. The code, models and dataset will be made publicly available.
### Inching Towards Automated Understanding of the Meaning of Art: An Application to Computational Analysis of Mondrian's Artwork
- **Authors:** Alex Doboli, Mahan Agha Zahedi, Niloofar Gholamrezaei
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2302.00594
- **Pdf link:** https://arxiv.org/pdf/2302.00594
- **Abstract**
Deep Neural Networks (DNNs) have been successfully used in classifying digital images but have been less successful in classifying images with meanings that are not linear combinations of their visualized features, like images of artwork. Moreover, it is unknown what additional features must be included into DNNs, so that they can possibly classify using features beyond visually displayed features, like color, size, and form. Non-displayed features are important in abstract representations, reasoning, and understanding ambiguous expressions, which are arguably topics less studied by current AI methods. This paper attempts to identify capabilities that are related to semantic processing, a current limitation of DNNs. The proposed methodology identifies the missing capabilities by comparing the process of understanding Mondrian's paintings with the process of understanding electronic circuit designs, another creative problem solving instance. The compared entities are cognitive architectures that attempt to loosely mimic cognitive activities. The paper offers a detailed presentation of the characteristics of the architectural components, like goals, concepts, ideas, rules, procedures, beliefs, expectations, and outcomes. To explain the usefulness of the methodology, the paper discusses a new, three-step computational method to distinguish Mondrian's paintings from other artwork. The method includes in a backward order the cognitive architecture's components that operate only with the characteristics of the available data.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Stroke-based Rendering: From Heuristics to Deep Learning
- **Authors:** Florian Nolte, Andrew Melnik, Helge Ritter
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2302.00595
- **Pdf link:** https://arxiv.org/pdf/2302.00595
- **Abstract**
In the last few years, artistic image-making with deep learning models has gained a considerable amount of traction. A large number of these models operate directly in the pixel space and generate raster images. This is however not how most humans would produce artworks, for example, by planning a sequence of shapes and strokes to draw. Recent developments in deep learning methods help to bridge the gap between stroke-based paintings and pixel photo generation. With this survey, we aim to provide a structured introduction and understanding of common challenges and approaches in stroke-based rendering algorithms. These algorithms range from simple rule-based heuristics to stroke optimization and deep reinforcement agents, trained to paint images with differentiable vector graphics and neural rendering.
### ADAPT: Action-aware Driving Caption Transformer
- **Authors:** Bu Jin, Xinyu Liu, Yupeng Zheng, Pengfei Li, Hao Zhao, Tong Zhang, Yuhang Zheng, Guyue Zhou, Jingjing Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC)
- **Arxiv link:** https://arxiv.org/abs/2302.00673
- **Pdf link:** https://arxiv.org/pdf/2302.00673
- **Abstract**
End-to-end autonomous driving has great potential in the transportation industry. However, the lack of transparency and interpretability of the automatic decision-making process hinders its industrial adoption in practice. There have been some early attempts to use attention maps or cost volume for better model explainability which is difficult for ordinary passengers to understand. To bridge the gap, we propose an end-to-end transformer-based architecture, ADAPT (Action-aware Driving cAPtion Transformer), which provides user-friendly natural language narrations and reasoning for each decision making step of autonomous vehicular control and action. ADAPT jointly trains both the driving caption task and the vehicular control prediction task, through a shared video representation. Experiments on BDD-X (Berkeley DeepDrive eXplanation) dataset demonstrate state-of-the-art performance of the ADAPT framework on both automatic metrics and human evaluation. To illustrate the feasibility of the proposed framework in real-world applications, we build a novel deployable system that takes raw car videos as input and outputs the action narrations and reasoning in real time. The code, models and data are available at https://github.com/jxbbb/ADAPT.
## Keyword: raw image
There is no result
| process | new submissions for thu feb keyword events transforming clip to an open vocabulary video model via interpolated weight optimization authors zejia weng xitong yang ang li zuxuan wu yu gang jiang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract contrastive language image pretraining clip has demonstrated impressive zero shot learning abilities for image understanding yet limited effort has been made to investigate clip for zero shot video recognition we introduce open vclip a simple yet effective approach that transforms clip into strong zero shot video classifiers that can recognize unseen actions and events at test time our framework extends clip with minimal modifications to model spatial temporal relationships in videos making it a specialized video classifier while striving for generalization we formally show that training an open vclip is equivalent to continual learning with zero historical data to address this problem we propose interpolated weight optimization which utilizes the benefit of weight interpolation in both training and test time we evaluate our method on three popular and challenging action recognition datasets following various zero shot evaluation protocols and we demonstrate our approach outperforms state of the art methods by clear margins in particular we achieve zero shot accuracy on ucf hmdb and kinetics respectively outperforming state of the art methods by and keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp multispectral pedestrian detection via reference box constrained cross attention and modality balanced optimization authors yinghui xing song wang guoqiang liang qingyi li xiuwei zhang shizhou zhang yanning zhang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract multispectral pedestrian detection is 
an important task for many around the clock applications since the visible and thermal modalities can provide complementary information especially under low light conditions to reduce the influence of hand designed components in available multispectral pedestrian detectors we propose a multispectral pedestrian detection transformer ms detr which extends deformable detr to multi modal paradigm in order to facilitate the multi modal learning process a reference box constrained cross attention rcca module is firstly introduced to the multi modal transformer decoder which takes fusion branch together with the reference boxes as intermediaries to enable the interaction of visible and thermal modalities to further balance the contribution of different modalities we design a modality balanced optimization strategy which aligns the slots of decoders by adaptively adjusting the instance level weight of three branches our end to end ms detr shows superior performance on the challenging kaist and cvc benchmark datasets do i have your attention a large scale engagement prediction dataset and baselines authors monisha singh ximi hoque donghuo zeng yanan wang kazushi ikeda abhinav dhall subjects computer vision and pattern recognition cs cv human computer interaction cs hc arxiv link pdf link abstract the degree of concentration enthusiasm optimism and passion displayed by individual s while interacting with a machine is referred to as user engagement engagement comprises of behavioural cognitive and affect related cues to create engagement predictions systems which can work in real world conditions it is quintessential to learn from rich diverse datasets to this end a large scale multi faceted engagement in the wild dataset is proposed hours duration data of participants representing different illumination conditions is recorded thorough experiments are performed exploring applicability of different features action units eye gaze and head pose and transformers to further 
validate the rich nature of the dataset evaluation is also performed on the engagewild dataset the experiments show the usefulness of the proposed dataset the code models and dataset will be made publicly available inching towards automated understanding of the meaning of art an application to computational analysis of mondrian s artwork authors alex doboli mahan agha zahedi niloofar gholamrezaei subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract deep neural networks dnns have been successfully used in classifying digital images but have been less successful in classifying images with meanings that are not linear combinations of their visualized features like images of artwork moreover it is unknown what additional features must be included into dnns so that they can possibly classify using features beyond visually displayed features like color size and form non displayed features are important in abstract representations reasoning and understanding ambiguous expressions which are arguably topics less studied by current ai methods this paper attempts to identify capabilities that are related to semantic processing a current limitation of dnns the proposed methodology identifies the missing capabilities by comparing the process of understanding mondrian s paintings with the process of understanding electronic circuit designs another creative problem solving instance the compared entities are cognitive architectures that attempt to loosely mimic cognitive activities the paper offers a detailed presentation of the characteristics of the architectural components like goals concepts ideas rules procedures beliefs expectations and outcomes to explain the usefulness of the methodology the paper discusses a new three step computational method to distinguish mondrian s paintings from other artwork the method includes in a backward order the cognitive architecture s components that operate only with the 
characteristics of the available data keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw stroke based rendering from heuristics to deep learning authors florian nolte andrew melnik helge ritter subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract in the last few years artistic image making with deep learning models has gained a considerable amount of traction a large number of these models operate directly in the pixel space and generate raster images this is however not how most humans would produce artworks for example by planning a sequence of shapes and strokes to draw recent developments in deep learning methods help to bridge the gap between stroke based paintings and pixel photo generation with this survey we aim to provide a structured introduction and understanding of common challenges and approaches in stroke based rendering algorithms these algorithms range from simple rule based heuristics to stroke optimization and deep reinforcement agents trained to paint images with differentiable vector graphics and neural rendering adapt action aware driving caption transformer authors bu jin xinyu liu yupeng zheng pengfei li hao zhao tong zhang yuhang zheng guyue zhou jingjing liu subjects computer vision and pattern recognition cs cv human computer interaction cs hc arxiv link pdf link abstract end to end autonomous driving has great potential in the transportation industry however the lack of transparency and interpretability of the automatic decision making process hinders its industrial adoption in practice there have been some early attempts to use attention maps or cost volume for better model explainability which is difficult for ordinary passengers to understand to bridge the gap we propose an end to end transformer based architecture adapt action aware driving caption transformer which provides user 
friendly natural language narrations and reasoning for each decision making step of autonomous vehicular control and action adapt jointly trains both the driving caption task and the vehicular control prediction task through a shared video representation experiments on bdd x berkeley deepdrive explanation dataset demonstrate state of the art performance of the adapt framework on both automatic metrics and human evaluation to illustrate the feasibility of the proposed framework in real world applications we build a novel deployable system that takes raw car videos as input and outputs the action narrations and reasoning in real time the code models and data are available at keyword raw image there is no result | 1 |
26,832 | 4,243,592,184 | IssuesEvent | 2016-07-06 23:49:54 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | opened | Build Failure: org.elasticsearch.snapshots.FsBlobStoreRepositoryIT.testMultipleSnapshotAndRollback | test v5.0.0 | I was unable to reproduce this, but I'm not running Windows.
Reproduce With:
gradle :core:integTest -Dtests.seed=94DC6FDB9849F3FD -Dtests.class=org.elasticsearch.snapshots.FsBlobStoreRepositoryIT -Dtests.method="testMultipleSnapshotAndRollback" -Dtests.es.logger.level=DEBUG -Dtests.assertion.disabled=org.elasticsearch -Dtests.security.manager=true -Dtests.nightly=false -Dtests.heap.size=762m -Dtests.jvm.argline="-server -XX:+UseConcMarkSweepGC -XX:-UseCompressedOops -XX:+AggressiveOpts" -Dtests.locale=cs -Dtests.timezone=America/Fort_Wayne
Build Failure:
http://build-us-00.elastic.co/job/es_core_master_windows-2012-r2/3485/testReport/junit/org.elasticsearch.snapshots/FsBlobStoreRepositoryIT/testMultipleSnapshotAndRollback/ | 1.0 | Build Failure: org.elasticsearch.snapshots.FsBlobStoreRepositoryIT.testMultipleSnapshotAndRollback - I was unable to reproduce this, but I'm not running Windows.
Reproduce With:
gradle :core:integTest -Dtests.seed=94DC6FDB9849F3FD -Dtests.class=org.elasticsearch.snapshots.FsBlobStoreRepositoryIT -Dtests.method="testMultipleSnapshotAndRollback" -Dtests.es.logger.level=DEBUG -Dtests.assertion.disabled=org.elasticsearch -Dtests.security.manager=true -Dtests.nightly=false -Dtests.heap.size=762m -Dtests.jvm.argline="-server -XX:+UseConcMarkSweepGC -XX:-UseCompressedOops -XX:+AggressiveOpts" -Dtests.locale=cs -Dtests.timezone=America/Fort_Wayne
Build Failure:
http://build-us-00.elastic.co/job/es_core_master_windows-2012-r2/3485/testReport/junit/org.elasticsearch.snapshots/FsBlobStoreRepositoryIT/testMultipleSnapshotAndRollback/ | non_process | build failure org elasticsearch snapshots fsblobstorerepositoryit testmultiplesnapshotandrollback i was unable to reproduce this but i m not running windows reproduce with gradle core integtest dtests seed dtests class org elasticsearch snapshots fsblobstorerepositoryit dtests method testmultiplesnapshotandrollback dtests es logger level debug dtests assertion disabled org elasticsearch dtests security manager true dtests nightly false dtests heap size dtests jvm argline server xx useconcmarksweepgc xx usecompressedoops xx aggressiveopts dtests locale cs dtests timezone america fort wayne build failure | 0 |
3,205 | 6,262,551,155 | IssuesEvent | 2017-07-15 11:54:36 | coala/teams | https://api.github.com/repos/coala/teams | closed | Aspects Team Member Application: Pratyush Prakash | process/approved |
# Bio
My name is Pratyush Prakash and I am studying Information Technology at NITK, Surathkal. I am 19 years old and technology is my passion. I <3 everything sci-fi and Formula 1.
# coala Contributions so far
My main contributions have been at coala core. I have already solved two issues related to cEP-0005. Apart from this I have also solved various other issues and have even written a Bear (PyromaBear). Apart from writing code I have of course done reviews and helped newcomers out on the gitter channel.
# Road to the Future
I wish to have aspects implemented in every bear and for the new configuration format involving tastes and aspects to be the de-facto method of configuring coala. The end goal is to have coala available and accessible to all, not only devs. I feel this is the first step towards making coala more accessible as it avoids the hassle of getting to know the bears. The users should be abstracted from the bears as it is not very intuitive to find every setting. Another functionality would be to be able to group results based on what aspects they analyze. It would make the work of future bear writers easier if there are already bears which provide the same functionality. Eventually this will help coala take over the world.
# Specific Responsibilities
The first step would be to implement aspect support in coala. I would also like to assist in converting the bears to the aspect model. Apart from this I would be continuously adding new aspects and corresponding tastes for the bear writers to use. | 1.0 | Aspects Team Member Application: Pratyush Prakash -
# Bio
My name is Pratyush Prakash and I am studying Information Technology at NITK, Surathkal. I am 19 years old and technology is my passion. I <3 everything sci-fi and Formula 1.
# coala Contributions so far
My main contributions have been at coala core. I have already solved two issues related to cEP-0005. Apart from this I have also solved various other issues and have even written a Bear (PyromaBear). Apart from writing code I have of course done reviews and helped newcomers out on the gitter channel.
# Road to the Future
I wish to have aspects implemented in every bear and for the new configuration format involving tastes and aspects to be the de-facto method of configuring coala. The end goal is to have coala available and accessible to all, not only devs. I feel this is the first step towards making coala more accessible as it avoids the hassle of getting to know the bears. The users should be abstracted from the bears as it is not very intuitive to find every setting. Another functionality would be to be able to group results based on what aspects they analyze. It would make the work of future bear writers easier if there are already bears which provide the same functionality. Eventually this will help coala take over the world.
# Specific Responsibilities
The first step would be to implement aspect support in coala. I would also like to assist in converting the bears to the aspect model. Apart from this I would be continuously adding new aspects and corresponding tastes for the bear writers to use. | process | aspects team member application pratyush prakash bio my name is pratyush prakash and i am studying information technology at nitk surathkal i am years old and technology is my passion i everything sci fi and formula coala contributions so far my main contributions have been at coala core i have already solved two issues related to cep apart from this i have also solved various other issues and have even written a bear pyromabear apart from writing code i have of course done reviews and helped newcomers out on the gitter channel road to the future i wish to have aspects implemented in every bear and for the new configuration format involving tastes and aspects to be the de facto method of configuring coala the end goal is to have coala available and accessible to all not only devs i feel this is the first step towards making coala more accessible as it avoids the hassle of getting to know the bears the users should be abstracted from the bears as it is not very intuitive to find every setting another functionality would be to be able to group results based on what aspects they analyze it would make the work of future bear writers easier if there are already bears which provide the same functionality eventually this will help coala take take over the world specific responsibilities the first step would be to implement aspect support in coala i would also like to assist in converting the bears to the aspect model apart from this i would be continuously adding new aspects and corresponding tastes for the bear writers to use | 1 |
9,102 | 12,178,652,184 | IssuesEvent | 2020-04-28 09:21:26 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Processing modeler error when trying to comment an algorithm | Bug Processing | I am trying to add a comment to an algorithm in the modeler (latest master). A right click on an algorithm offers the menu item "Add comment", but an exception is raised when clicking on it:
```
AttributeError: 'ModelerParametersDialog' object has no attribute 'widget'
Traceback (most recent call last):
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerGraphicItem.py", line 164, in editComment
self.edit(edit_comment=True)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerGraphicItem.py", line 135, in edit
self.component().configuration())
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerParametersDialog.py", line 79, in __init__
self.widget = ModelerParametersWidget(alg, model, algName, configuration, context=self.context, dialog=self)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerParametersDialog.py", line 500, in __init__
self.widget = ModelerParametersPanelWidget(alg, model, algName, configuration, dialog, context)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerParametersDialog.py", line 160, in __init__
self.setupUi()
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerParametersDialog.py", line 225, in setupUi
wrapper = WidgetWrapperFactory.create_wrapper(param, self.dialog)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/gui/wrappers.py", line 1847, in create_wrapper
return WidgetWrapperFactory.create_wrapper_from_metadata(param, dialog, row, col)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/gui/wrappers.py", line 1882, in create_wrapper_from_metadata
wrapper = wrapper(param, dialog, row, col, **params)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/gui/wrappers.py", line 155, in __init__
self.widget = self.createWidget(**kwargs)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/algs/gdal/ui/RasterOptionsWidget.py", line 38, in createWidget
strings = self.dialog.getAvailableValuesOfType(QgsProcessingParameterString, QgsProcessingOutputString)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerParametersDialog.py", line 111, in getAvailableValuesOfType
return self.widget.getAvailableValuesOfType(paramType, outTypes, dataTypes)
AttributeError: 'ModelerParametersDialog' object has no attribute 'widget'
```
The model in question can be downloaded from https://raw.githubusercontent.com/jvdkwast/PyQGIS_Hydro/master/models/accessible_wells.model3 | 1.0 | Processing modeler error when trying to comment an algorithm - I am trying to add a comment to an algorithm in the modeler (latest master). A right click on an algorithm offers the menu item "Add comment", but an exception is raised when clicking on it:
```
AttributeError: 'ModelerParametersDialog' object has no attribute 'widget'
Traceback (most recent call last):
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerGraphicItem.py", line 164, in editComment
self.edit(edit_comment=True)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerGraphicItem.py", line 135, in edit
self.component().configuration())
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerParametersDialog.py", line 79, in __init__
self.widget = ModelerParametersWidget(alg, model, algName, configuration, context=self.context, dialog=self)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerParametersDialog.py", line 500, in __init__
self.widget = ModelerParametersPanelWidget(alg, model, algName, configuration, dialog, context)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerParametersDialog.py", line 160, in __init__
self.setupUi()
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerParametersDialog.py", line 225, in setupUi
wrapper = WidgetWrapperFactory.create_wrapper(param, self.dialog)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/gui/wrappers.py", line 1847, in create_wrapper
return WidgetWrapperFactory.create_wrapper_from_metadata(param, dialog, row, col)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/gui/wrappers.py", line 1882, in create_wrapper_from_metadata
wrapper = wrapper(param, dialog, row, col, **params)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/gui/wrappers.py", line 155, in __init__
self.widget = self.createWidget(**kwargs)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/algs/gdal/ui/RasterOptionsWidget.py", line 38, in createWidget
strings = self.dialog.getAvailableValuesOfType(QgsProcessingParameterString, QgsProcessingOutputString)
File "/home/mkuhn/dev/qgis/build-QGIS-Desktop-Debug/output/python/plugins/processing/modeler/ModelerParametersDialog.py", line 111, in getAvailableValuesOfType
return self.widget.getAvailableValuesOfType(paramType, outTypes, dataTypes)
AttributeError: 'ModelerParametersDialog' object has no attribute 'widget'
```
The model in question can be downloaded from https://raw.githubusercontent.com/jvdkwast/PyQGIS_Hydro/master/models/accessible_wells.model3 | process | processing modeler error when trying to comment an algorithm i am trying to add a comment to an algorithm in the modeler latest master a right click on an algorithm offers the menu item add comment but an exception is raised when clicking on it attributeerror modelerparametersdialog object has no attribute widget traceback most recent call last file home mkuhn dev qgis build qgis desktop debug output python plugins processing modeler modelergraphicitem py line in editcomment self edit edit comment true file home mkuhn dev qgis build qgis desktop debug output python plugins processing modeler modelergraphicitem py line in edit self component configuration file home mkuhn dev qgis build qgis desktop debug output python plugins processing modeler modelerparametersdialog py line in init self widget modelerparameterswidget alg model algname configuration context self context dialog self file home mkuhn dev qgis build qgis desktop debug output python plugins processing modeler modelerparametersdialog py line in init self widget modelerparameterspanelwidget alg model algname configuration dialog context file home mkuhn dev qgis build qgis desktop debug output python plugins processing modeler modelerparametersdialog py line in init self setupui file home mkuhn dev qgis build qgis desktop debug output python plugins processing modeler modelerparametersdialog py line in setupui wrapper widgetwrapperfactory create wrapper param self dialog file home mkuhn dev qgis build qgis desktop debug output python plugins processing gui wrappers py line in create wrapper return widgetwrapperfactory create wrapper from metadata param dialog row col file home mkuhn dev qgis build qgis desktop debug output python plugins processing gui wrappers py line in create wrapper from metadata wrapper wrapper param dialog row col params file home 
mkuhn dev qgis build qgis desktop debug output python plugins processing gui wrappers py line in init self widget self createwidget kwargs file home mkuhn dev qgis build qgis desktop debug output python plugins processing algs gdal ui rasteroptionswidget py line in createwidget strings self dialog getavailablevaluesoftype qgsprocessingparameterstring qgsprocessingoutputstring file home mkuhn dev qgis build qgis desktop debug output python plugins processing modeler modelerparametersdialog py line in getavailablevaluesoftype return self widget getavailablevaluesoftype paramtype outtypes datatypes attributeerror modelerparametersdialog object has no attribute widget the model in question can be downloaded from | 1 |
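The traceback in this record shows a construction-order bug: while `ModelerParametersWidget` is still being built, a widget wrapper (here the one behind `RasterOptionsWidget.createWidget`) calls back into the dialog's `getAvailableValuesOfType`, which delegates to `self.widget` before that attribute has been assigned. A minimal, hypothetical Python sketch of the re-entrancy pattern and one possible guard; the class and method names mirror the traceback, but these are plain stand-ins, not the real QGIS/Processing API:

```python
class PanelWidget:
    """Stand-in for ModelerParametersWidget: building it triggers widget
    wrappers, and some wrappers query the dialog for available values
    while construction is still in progress."""

    def __init__(self, dialog):
        # Re-entrant callback, as in the RasterOptionsWidget wrapper above.
        dialog.getAvailableValuesOfType("QgsProcessingParameterString")
        self.dialog = dialog

    def getAvailableValuesOfType(self, param_type):
        return ["value_of_" + param_type]


class Dialog:
    """Stand-in for ModelerParametersDialog."""

    def __init__(self):
        self.widget = None               # sentinel set before construction starts
        self.widget = PanelWidget(self)  # the callback fires inside this call

    def getAvailableValuesOfType(self, param_type):
        # Guard: the callback can arrive before self.widget is fully built.
        if self.widget is None:
            return []
        return self.widget.getAvailableValuesOfType(param_type)


dialog = Dialog()  # without the sentinel and guard, this step raises AttributeError
print(dialog.getAvailableValuesOfType("QgsProcessingParameterString"))
# → ['value_of_QgsProcessingParameterString']
```

Whether the real fix should return an empty list, defer wrapper creation, or assign the attribute earlier is a design choice for the QGIS developers; the sketch only illustrates why the `AttributeError` occurs and one way to guard against it.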
128,433 | 17,535,742,220 | IssuesEvent | 2021-08-12 06:08:19 | hackforla/website | https://api.github.com/repos/hackforla/website | opened | Complete refactor/standardization of all buttons within HfLA website | role: front end Size: Medium Feature: Design system | ### Overview
Complete the refactor of the buttons per the recommendations in the CSS standardization sheets linked in the resources below.
### Action Items
- [ ] Make the 20 recommended edits for the buttons from the spreadsheet below (10 files will be updated)
- [ ] Confirm after the recommended edits that the `Action` of each button works as intended
- [ ] Create a Pull Request with the proposed changes to the button
### Resources/Instructions
[Google Sheets - CSS Standardization Buttons](https://docs.google.com/spreadsheets/d/1n3va-1kwGOCTP4JaXpL3TjgvS_2RWDkurb8PyCJSp-0/edit?usp=sharing)
#1921
| 1.0 | Complete refactor/standardization of all buttons within HfLA website - ### Overview
Complete the refactor of the buttons per the recommendations in the CSS standardization sheets linked in the resources below.
### Action Items
- [ ] Make the 20 recommended edits for the buttons from the spreadsheet below (10 files will be updated)
- [ ] Confirm after the recommended edits that the `Action` of each button works as intended
- [ ] Create a Pull Request with the proposed changes to the button
### Resources/Instructions
[Google Sheets - CSS Standardization Buttons](https://docs.google.com/spreadsheets/d/1n3va-1kwGOCTP4JaXpL3TjgvS_2RWDkurb8PyCJSp-0/edit?usp=sharing)
#1921
| non_process | complete refactor standardization of all buttons within hfla website overview complete refactor for buttons from the recommendations in the css standardization sheets within the resources below action items make the recommended edits for the buttons from the spreadsheet below files will be updated confirm after the recommended edits that the action of each button works as intended create a pull request with the proposed changes to the button resources instructions | 0 |
22,045 | 30,568,002,552 | IssuesEvent | 2023-07-20 19:25:06 | memphisdev/memphis | https://api.github.com/repos/memphisdev/memphis | closed | Inline processing | epic: Stream Processing In Roadmap | ## **Summary**
Embed functions and code in a serverless runtime into stations to enable stream processing on-the-fly.
## **Context**
One of Memphis's core features and milestones. Enabling true real-time stream processing within the broker itself using any language, any code made by the community, or users themselves instead of placing heavy business logic at the consumer/producer level. Stream processing usually takes place by stitching a broker with Apache Flink and similar apps, turning real-time into near real-time or slower. Memphis will provide all three components under the same hood: ingesting, transforming, and processing.
## **Value**
True real-time, stream processing using any language, any type of code or app.
## **Persona(s)**
- Data Engineers
- Developers | 1.0 | Inline processing - ## **Summary**
Embed functions and code in a serverless runtime into stations to enable stream processing on-the-fly.
## **Context**
One of Memphis's core features and milestones. Enabling true real-time stream processing within the broker itself using any language, any code made by the community, or users themselves instead of placing heavy business logic at the consumer/producer level. Stream processing usually takes place by stitching a broker with Apache Flink and similar apps, turning real-time into near real-time or slower. Memphis will provide all three components under the same hood: ingesting, transforming, and processing.
## **Value**
True real-time, stream processing using any language, any type of code or app.
## **Persona(s)**
- Data Engineers
- Developers | process | inline processing summary embed functions and code in a serverless runtime into stations to enable stream processing on the fly context one of memphis s core features and milestones enabling true real time stream processing within the broker itself using any language any code made by the community or users themselves instead of placing heavy business logic in the consumer producer level stream processing usually takes place by stitching a broker with apache flink and similar apps turning real time to near real time or slower memphis will provide a all three components under the same hood ingesting transforming and processing value true real time stream processing using any language any type of code or app persona s data engineers developers | 1 |
124,093 | 12,224,591,796 | IssuesEvent | 2020-05-02 23:33:32 | wxcapture/wxcapture | https://api.github.com/repos/wxcapture/wxcapture | opened | Configurable website relative to the document root | documentation enhancement | The current code assumes that the website is in the wxcapture folder which is inside the website document root folder.
There is currently very limited and incomplete support for this via the "Link Base" value in config.py, which is used by schedule_passes.py.
This support needs to be extended, including any code that produces / modifies .html files.
This will also need to be updated in the installation documentation. | 1.0 | Configurable website relative to the document root - The current code assumes that the website is in the wxcapture folder which is inside the website document root folder.
There is currently very limited and incomplete support for this via the "Link Base" value in config.py, which is used by schedule_passes.py.
This support needs to be extended, including any code that produces / modifies .html files.
This will also need to be updated in the installation documentation. | non_process | configurable website relative to the document root the current code assumes that the website is in the wxcapture folder which is inside the website document root folder there is currently very limited and incomplete support for this via the link base value in config py which is used by schedule passes py this support needs to be extended including any code that produces modifies html files this will also need to be updated in the installation documentation | 0 |
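The record above asks for the hard-coded `wxcapture` sub-folder assumption to be replaced by a configurable base path. One common shape for that is to route every generated link through a single helper that reads the configured base; `build_url` and its arguments below are illustrative names, not the project's actual `config.py` API:

```python
def build_url(link_base, relative_path):
    """Join a configured web base (e.g. '' or '/wxcapture/') with a page
    path, normalising stray slashes so page-generating code needs no
    special case for a site served from the document root itself."""
    base = "/" + link_base.strip("/")
    if base == "/":
        return "/" + relative_path.lstrip("/")
    return base + "/" + relative_path.lstrip("/")


# Site served straight from the document root:
print(build_url("", "satellites/noaa15.html"))
# → /satellites/noaa15.html

# Site served from a sub-folder, as the current code hard-codes:
print(build_url("/wxcapture/", "satellites/noaa15.html"))
# → /wxcapture/satellites/noaa15.html
```

Any code that produces or modifies .html files would then call this one helper instead of embedding the folder name, which also gives the installation documentation a single setting to describe.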
11,345 | 14,168,390,525 | IssuesEvent | 2020-11-12 11:39:53 | AdrianArnaiz/Brain-MRI-Autoencoder | https://api.github.com/repos/AdrianArnaiz/Brain-MRI-Autoencoder | closed | Data Exploration - Relevant Slice Selection | data-preprocessing | * **Select view** of volume slices: sagittal, axial or coronal.
* **Select slices** with relevant information: n-fixed, outer software or OpenCV code. | 1.0 | Data Exploration - Relevant Slice Selection - * **Select view** of volume slices: sagittal, axial or coronal.
* **Select slices** with relevant information: n-fixed, outer software or OpenCV code. | process | data exploration relevant slice selection select view of volume slices sagittal axial or coronal select slices with relevant information n fixed outer software or opencv code | 1 |
119,976 | 15,684,824,494 | IssuesEvent | 2021-03-25 10:27:27 | emory-libraries/blacklight-catalog | https://api.github.com/repos/emory-libraries/blacklight-catalog | opened | Refine wireframe for Tools Menu on Single Item View Display | UI Design View (Display and Navigation) | As the product owner, I would like refinements made to the Tools Menu that currently appears in the Single Item View Display in order to improve the overall functionality and make the feature more user friendly.
The Tools Menu in its final form will include the following functions (listed in order as they should appear)
- Bookmark Item (with a check box the user will select to bookmark the item. Currently listed as "bookmark" but believe there should be a differentiation between bookmark the item and the Bookmarks in the header. )
- Cite (metadata requirements coming from Emily Porter reference ticket [#344](https://app.zenhub.com/workspaces/blacklight-catalog-5f5f84a8a6d29939a0bc5d78/issues/emory-libraries/blacklight-catalog/344))
- Print (currently does not exist but is a requirement defined by stakeholders)
- Help (this will eventually link to a static help page but will initially just include the label)
- Feedback (this will eventually link to a feedback form but will initially just include the label)
- Staff View (currently listed as Librarian View)
Acceptance Criteria:
- [ ] Refine the existing wireframe for the Tools Menu based on requirements above
- [ ] Annotate wireframe based on above requirements for future development
- [ ] Provide CSS Markup of Tools Menu based on styling in Lux with dark blue header and gold text (see screen shot below)

| 1.0 | Refine wireframe for Tools Menu on Single Item View Display - As the product owner, I would like refinements made to the Tools Menu that currently appears in the Single Item View Display in order to improve the overall functionality and make the feature more user friendly.
The Tools Menu in its final form will include the following functions (listed in order as they should appear)
- Bookmark Item (with a check box the user will select to bookmark the item. Currently listed as "bookmark" but believe there should be a differentiation between bookmark the item and the Bookmarks in the header. )
- Cite (metadata requirements coming from Emily Porter reference ticket [#344](https://app.zenhub.com/workspaces/blacklight-catalog-5f5f84a8a6d29939a0bc5d78/issues/emory-libraries/blacklight-catalog/344))
- Print (currently does not exist but is a requirement defined by stakeholders)
- Help (this will eventually link to a static help page but will initially just include the label)
- Feedback (this will eventually link to a feedback form but will initially just include the label)
- Staff View (currently listed as Librarian View)
Acceptance Criteria:
- [ ] Refine the existing wireframe for the Tools Menu based on requirements above
- [ ] Annotate wireframe based on above requirements for future development
- [ ] Provide CSS Markup of Tools Menu based on styling in Lux with dark blue header and gold text (see screen shot below)

| non_process | refine wireframe for tools menu on single item view display as the product owner i would like refinements made to the tools menu that currently appears in the single item view display in order to improve the overall functionality and make the feature more user friendly the tools menu in its final form will include the following functions listed in order as they should appear bookmark item with a check box the user will select to bookmark the item currently listed as bookmark but believe there should be a differentiation between bookmark the item and the bookmarks in the header cite metadata requirements coming from emily porter reference ticket print currently does not exist but is a requirement defined by stakeholders help this will eventually link to a static help page but will initially just include the label feedback this will eventually link to a feedback form but will initially just include the label staff view currently listed as librarian view acceptance criteria refine the existing wireframe for the tools menu based on requirements above annotate wireframe based on above requirements for future development provide css markup of tools menu based on styling in lux with dark blue header and gold text see screen shot below | 0 |
298,692 | 25,847,988,955 | IssuesEvent | 2022-12-13 08:20:09 | vishal-testgh20221021/testgh | https://api.github.com/repos/vishal-testgh20221021/testgh | opened | Test Failed - Check first name and last name - Step 1 | testcollab |
h3. Test Case Details
*Test case title*: Check first name and last name
*Test plan title*: Test Plan added from Cypress 20221213134941 *Steps*:
|S.No.|Step|Expected Result|Status|Comment|
|-|-|-|-|-|
|*1*|Enter special characters / numerals in first name field and submit form with all other details filled up correctly|An error message pertaining to incorrect first name should be shown and focus should be on first name field|Fail| |
|2|Enter special characters / numerals in last name field and submit form with all other details filled up correctly|An error message pertaining to incorrect last name should be shown and focus should be on last name field|Not executed| |
|3|Enter valid first name , last name and submit form with all other details filled up correctly|The form should be submitted with no errors and user should be redirected to next step of signup|Not executed| | | 1.0 | Test Failed - Check first name and last name - Step 1 -
h3. Test Case Details
*Test case title*: Check first name and last name
*Test plan title*: Test Plan added from Cypress 20221213134941 *Steps*:
|S.No.|Step|Expected Result|Status|Comment|
|-|-|-|-|-|
|*1*|Enter special characters / numerals in first name field and submit form with all other details filled up correctly|An error message pertaining to incorrect first name should be shown and focus should be on first name field|Fail| |
|2|Enter special characters / numerals in last name field and submit form with all other details filled up correctly|An error message pertaining to incorrect last name should be shown and focus should be on last name field|Not executed| |
|3|Enter valid first name , last name and submit form with all other details filled up correctly|The form should be submitted with no errors and user should be redirected to next step of signup|Not executed| | | non_process | test failed check first name and last name step test case details test case title check first name and last name test plan title test plan added from cypress steps s no step expected result status comment enter special characters numerals in first name field and submit form with all other details filled up correctly an error message pertaining to incorrect first name should be shown and focus should be on first name field fail enter special characters numerals in last name field and submit form with all other details filled up correctly an error message pertaining to incorrect last name should be shown and focus should be on last name field not executed enter valid first name last name and submit form with all other details filled up correctly the form should be submitted with no errors and user should be redirected to next step of signup not executed | 0 |
8,930 | 12,037,553,081 | IssuesEvent | 2020-04-13 22:06:50 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | Remove a VM from an Environment - Not up to date for Windows machine? | Pri2 cba devops-cicd-process/tech devops/prod doc-bug | `./configure.sh remove` doesn't seem to work on a Windows machine. Instead you have to run `./configure.cmd remove` (and then supply your token)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 91d0d31f-81ee-c024-db7e-daddbf525f71
* Version Independent ID: 330f1649-386c-d0aa-5f96-b8343a1480d3
* Content: [Environment - Virtual machine resource - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-virtual-machines?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/environments-virtual-machines.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/environments-virtual-machines.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | 1.0 | Remove a VM from an Environment - Not up to date for Windows machine? - `./configure.sh remove` doesn't seem to work on a Windows machine. Instead you have to run `./configure.cmd remove` (and then supply your token)
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 91d0d31f-81ee-c024-db7e-daddbf525f71
* Version Independent ID: 330f1649-386c-d0aa-5f96-b8343a1480d3
* Content: [Environment - Virtual machine resource - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments-virtual-machines?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/environments-virtual-machines.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/environments-virtual-machines.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | process | remove a vm from an environment not up to date for windows machine configure sh remove doesn t seem to work on a windows machine instead you have to run configure cmd remove and then supply your token document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam | 1 |
34,841 | 14,531,005,064 | IssuesEvent | 2020-12-14 20:07:11 | microsoft/BotFramework-Services | https://api.github.com/repos/microsoft/BotFramework-Services | closed | DirectLine Enhanced Authentication->getsessionid results in 400 BadRequest and Without DirectLine Enhanced Authentication->Giving magic code, instead of automatic login | Bot Services customer-replied-to customer-reported | I am facing few strange issues.
1. I have enabled DirectLine Enhanced Authentication -->I added trusted origin as well. But if I try to hit using webchat I am getting 400 error and bot not able to connect even.
2. I have disabled DirectLine Enhanced Authentication --> My bot is working at least but still getting magic code, instead of auto login.
I need your assistance on How to get rid of magic code? This really creates huge impact and users completely dislike MSBot and wanna move out to different vendors.
It would be great help, if we could address this at the earliest
| 1.0 | DirectLine Enhanced Authentication->getsessionid results in 400 BadRequest and Without DirectLine Enhanced Authentication->Giving magic code, instead of automatic login - I am facing few strange issues.
1. I have enabled DirectLine Enhanced Authentication -->I added trusted origin as well. But if I try to hit using webchat I am getting 400 error and bot not able to connect even.
2. I have disabled DirectLine Enhanced Authentication --> My bot is working at least but still getting magic code, instead of auto login.
I need your assistance on How to get rid of magic code? This really creates huge impact and users completely dislike MSBot and wanna move out to different vendors.
It would be great help, if we could address this at the earliest
| non_process | directline enhanced authentication getsessionid results in badrequest and without directline enhanced authentication giving magic code instead of automatic login i am facing few strange issues i have enabled directline enhanced authentication i added trusted origin as well but if i try to hit using webchat i am getting error and bot not able to connect even i have disabled directline enhanced authentication my bot is working at least but still getting magic code instead of auto login i need your assistance on how to get rid of magic code this really creates huge impact and users completely dislike msbot and wanna move out to different vendors it would be great help if we could address this at the earliest | 0 |
6,185 | 9,102,019,130 | IssuesEvent | 2019-02-20 12:43:21 | linnovate/root | https://api.github.com/repos/linnovate/root | opened | document templates inheritance from office | 2.0.6 Process bug | after associating Template Document to Office (with members )
and the user (of the office) update to the Template Document only after refresh.
-the office with one user

-the Template Document before refresh (user not appears)

the member added is only with viewer permissions when he has higher permissions | 1.0 | document templates inheritance from office - after associating Template Document to Office (with members )
and the user (of the office) update to the Template Document only after refresh.
-the office with one user

-the Template Document before refresh (user not appears)

the member added is only with viewer permissions when he has higher permissions | process | document templates inheritance from office after associating template document to office with members and the user of the office update to the template document only after refresh the office with one user the template document before refresh user not appears the member added is only with viewer permissions when he has higher permissions | 1 |
9,034 | 12,130,107,877 | IssuesEvent | 2020-04-23 00:30:39 | GoogleCloudPlatform/python-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples | closed | remove gcp-devrel-py-tools from appengine/standard/iap/requirements-test.txt | priority: p2 remove-gcp-devrel-py-tools type: process | remove gcp-devrel-py-tools from appengine/standard/iap/requirements-test.txt | 1.0 | remove gcp-devrel-py-tools from appengine/standard/iap/requirements-test.txt - remove gcp-devrel-py-tools from appengine/standard/iap/requirements-test.txt | process | remove gcp devrel py tools from appengine standard iap requirements test txt remove gcp devrel py tools from appengine standard iap requirements test txt | 1 |
645,525 | 21,007,336,734 | IssuesEvent | 2022-03-30 00:44:33 | honeydipper/honeydipper | https://api.github.com/repos/honeydipper/honeydipper | closed | migrate charts | priority: p3 discussion chore | Since we're self-hosting the chart, we should run https://github.com/helm/chart-testing in CI if we're not already.
The incubator chart is being phased out. Looking at https://github.com/helm/hub/blob/master/Repositories.md, we're already in https://github.com/helm/hub/blob/2d965b985ac824284334fc00f18f3f0bfd87a545/config/repo-values.yaml#L205-L206 (though we may want to update these values)
If possible, let's also consider moving the chart back to this repo from https://github.com/honeydipper/honeydipper-charts, which may make it easier to keep up to date (auto-updating versions, etc.) and switching branch from master -> main. | 1.0 | migrate charts - Since we're self-hosting the chart, we should run https://github.com/helm/chart-testing in CI if we're not already.
The incubator chart is being phased out. Looking at https://github.com/helm/hub/blob/master/Repositories.md, we're already in https://github.com/helm/hub/blob/2d965b985ac824284334fc00f18f3f0bfd87a545/config/repo-values.yaml#L205-L206 (though we may want to update these values)
If possible, let's also consider moving the chart back to this repo from https://github.com/honeydipper/honeydipper-charts, which may make it easier to keep up to date (auto-updating versions, etc.) and switching branch from master -> main. | non_process | migrate charts since we re self hosting the chart we should run in ci if we re not already the incubator chart is being phased out looking at we re already in though we may want to update these values if possible let s also consider moving the chart back to this repo from which may make it easier to keep up to date auto updating versions etc and switching branch from master main | 0 |
4,428 | 7,307,027,134 | IssuesEvent | 2018-02-28 00:36:41 | P2Poker/P2Poker | https://api.github.com/repos/P2Poker/P2Poker | opened | As a developer, I need a system of build/test configurations for various parts of the desired software | c) dev origin d) release 0.1 e) dev tools f) priority 2 g) change request h) in process j) difficult workaround l) minor completion cost l) no ux impact n) no impact n) no users affected o) as a developer … p) triage completed | Create a series of pre-determined build/test configurations, which can be expanded in the future as needed. | 1.0 | As a developer, I need a system of build/test configurations for various parts of the desired software - Create a series of pre-determined build/test configurations, which can be expanded in the future as needed. | process | as a developer i need a system of build test configurations for various parts of the desired software create a series of pre determined build test configurations which can be expanded in the future as needed | 1 |
71,574 | 13,685,645,767 | IssuesEvent | 2020-09-30 07:29:22 | gautamkrishnar/socli | https://api.github.com/repos/gautamkrishnar/socli | opened | Add sentry for error logging | Hacktoberfest enhancement leapcode up-for-grabs | Add sentry to **socli**:
Install the dependency:
```bash
pip install --upgrade sentry-sdk
```
Call it on the socli.py
```python
import sentry_sdk
sentry_sdk.init(
"https://95c4106659044cbda2ea0fe499f4be7e@o323465.ingest.sentry.io/5445901",
traces_sample_rate=0.5
)
```
Make sure that this code will be run only when the user installed the package via pip, not during the development when someone runs `python -m socli`. | 1.0 | Add sentry for error logging - Add sentry to **socli**:
Install the dependency:
```bash
pip install --upgrade sentry-sdk
```
Call it on the socli.py
```python
import sentry_sdk
sentry_sdk.init(
"https://95c4106659044cbda2ea0fe499f4be7e@o323465.ingest.sentry.io/5445901",
traces_sample_rate=0.5
)
```
Make sure that this code will be run only when the user installed the package via pip, not during the development when someone runs `python -m socli`. | non_process | add sentry for error logging add sentry to socli install the dependency bash pip install upgrade sentry sdk call it on the socli py python import sentry sdk sentry sdk init traces sample rate make sure that this code will be run only when the user installed the package via pip not during the development when someone runs python m socli | 0 |
21,514 | 29,800,903,632 | IssuesEvent | 2023-06-16 08:02:06 | googleapis/google-cloud-go | https://api.github.com/repos/googleapis/google-cloud-go | opened | Your .repo-metadata.json files have a problem 🤒 | type: process repo-metadata: lint | You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* api_shortname 'errorreporting' invalid in internal/.repo-metadata-full.json
* api_shortname 'longrunning' invalid in internal/.repo-metadata-full.json
* api_shortname 'profiler' invalid in internal/.repo-metadata-full.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json files have a problem 🤒 - You have a problem with your .repo-metadata.json files:
Result of scan 📈:
* api_shortname 'errorreporting' invalid in internal/.repo-metadata-full.json
* api_shortname 'longrunning' invalid in internal/.repo-metadata-full.json
* api_shortname 'profiler' invalid in internal/.repo-metadata-full.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | process | your repo metadata json files have a problem 🤒 you have a problem with your repo metadata json files result of scan 📈 api shortname errorreporting invalid in internal repo metadata full json api shortname longrunning invalid in internal repo metadata full json api shortname profiler invalid in internal repo metadata full json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions | 1 |
13,594 | 16,165,232,940 | IssuesEvent | 2021-05-01 10:54:17 | ooi-data/CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record | https://api.github.com/repos/ooi-data/CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record | opened | 🛑 Processing failed: OSError | process | ## Overview
`OSError` found in `processing_task` task during run ended on 2021-05-01T10:54:17.295221.
## Details
Flow name: `CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record`
Task name: `processing_task`
Error type: `OSError`
Error message: [Errno 16] Please reduce your request rate.
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 233, in _call_s3
out = await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (SlowDown) when calling the DeleteObjects operation (reached max retries: 4): Please reduce your request rate.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 311, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1151, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 72, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 53, in sync
raise result[0]
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 20, in _runner
result[0] = await coro
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1510, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1498, in _bulk_delete
await self._call_s3("delete_objects", kwargs, Bucket=bucket, Delete=delete_keys)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err)
OSError: [Errno 16] Please reduce your request rate.
```
</details>
| 1.0 | 🛑 Processing failed: OSError - ## Overview
`OSError` found in `processing_task` task during run ended on 2021-05-01T10:54:17.295221.
## Details
Flow name: `CE04OSPS-SF01B-3A-FLORTD104-streamed-flort_d_data_record`
Task name: `processing_task`
Error type: `OSError`
Error message: [Errno 16] Please reduce your request rate.
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 233, in _call_s3
out = await method(**additional_kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (SlowDown) when calling the DeleteObjects operation (reached max retries: 4): Please reduce your request rate.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/share/miniconda/envs/harvester/lib/python3.8/site-packages/ooi_harvester/processor/pipeline.py", line 71, in processing_task
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/ooi_harvester/processor/__init__.py", line 311, in finalize_zarr
source_store.fs.delete(source_store.root, recursive=True)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/spec.py", line 1151, in delete
return self.rm(path, recursive=recursive, maxdepth=maxdepth)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 72, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 53, in sync
raise result[0]
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/fsspec/asyn.py", line 20, in _runner
result[0] = await coro
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1510, in _rm
await asyncio.gather(
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 1498, in _bulk_delete
await self._call_s3("delete_objects", kwargs, Bucket=bucket, Delete=delete_keys)
File "/srv/conda/envs/notebook/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err)
OSError: [Errno 16] Please reduce your request rate.
```
</details>
| process | 🛑 processing failed oserror overview oserror found in processing task task during run ended on details flow name streamed flort d data record task name processing task error type oserror error message please reduce your request rate traceback traceback most recent call last file srv conda envs notebook lib site packages core py line in call out await method additional kwargs file srv conda envs notebook lib site packages aiobotocore client py line in make api call raise error class parsed response operation name botocore exceptions clienterror an error occurred slowdown when calling the deleteobjects operation reached max retries please reduce your request rate the above exception was the direct cause of the following exception traceback most recent call last file usr share miniconda envs harvester lib site packages ooi harvester processor pipeline py line in processing task file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize zarr source store fs delete source store root recursive true file srv conda envs notebook lib site packages fsspec spec py line in delete return self rm path recursive recursive maxdepth maxdepth file srv conda envs notebook lib site packages fsspec asyn py line in wrapper return sync self loop func args kwargs file srv conda envs notebook lib site packages fsspec asyn py line in sync raise result file srv conda envs notebook lib site packages fsspec asyn py line in runner result await coro file srv conda envs notebook lib site packages core py line in rm await asyncio gather file srv conda envs notebook lib site packages core py line in bulk delete await self call delete objects kwargs bucket bucket delete delete keys file srv conda envs notebook lib site packages core py line in call raise translate boto error err oserror please reduce your request rate | 1 |
4,577 | 7,404,328,660 | IssuesEvent | 2018-03-20 04:00:53 | nyu-software-engineering/mood-music-player | https://api.github.com/repos/nyu-software-engineering/mood-music-player | closed | Pull music analysis from Gracenote Developer | 2 - in process task | - [x] import API
- [x] make sure everything runs smoothly | 1.0 | Pull music analysis from Gracenote Developer - - [x] import API
- [x] make sure everything runs smoothly | process | pull music analysis from gracenote developer import api make sure everything runs smoothly | 1 |
22,201 | 30,758,028,263 | IssuesEvent | 2023-07-29 10:11:47 | bitfocus/companion-module-requests | https://api.github.com/repos/bitfocus/companion-module-requests | opened | Thor RF11iQ PDU | NOT YET PROCESSED | - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
Thor RF11iQ PDU
What you would like to be able to make it do from Companion:
Switch outlets with feedback
Direct links or attachments to the ethernet control protocol or API:
https://www.thortechnologies.com.au/product/thor-smartbrain-rf11iq/ | 1.0 | Thor RF11iQ PDU - - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested**
The name of the device, hardware, or software you would like to control:
Thor RF11iQ PDU
What you would like to be able to make it do from Companion:
Switch outlets with feedback
Direct links or attachments to the ethernet control protocol or API:
https://www.thortechnologies.com.au/product/thor-smartbrain-rf11iq/ | process | thor pdu i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control thor pdu what you would like to be able to make it do from companion switch outlets with feedback direct links or attachments to the ethernet control protocol or api | 1 |
5,259 | 8,052,238,504 | IssuesEvent | 2018-08-01 18:38:59 | HumanCellAtlas/dcp-community | https://api.github.com/repos/HumanCellAtlas/dcp-community | opened | Updating a project charter | charter-process | Once there is more experience with the charter process, we should review the scenarios where updating a charter is recommended or required. In practice, the scenarios tend to include:
* A project has completed the specific milestones in its original charter and proposes to add additional milestones.
* A project is unable to complete the work outlined in its charter.
* The charter assumptions have been modified by experience.
Depending on the outcome, we can assess how **lightweight** the process should be. For example, adopting a process similar to [Kubernetes](https://github.com/kubernetes/community/tree/master/committee-steering/governance#steps-to-update-an-existing-sig-charter) would allow a project lead to approve charter updates that are internal in scope.
Changes to `charter/README.md`:
1. Rename _What the process is_ to _Creating a project charter_
2. Add _Updating a project charter_
| 1.0 | Updating a project charter - Once there is more experience with the charter process, we should review the scenarios where updating a charter is recommended or required. In practice, the scenarios tend to include:
* A project has completed the specific milestones in its original charter and proposes to add additional milestones.
* A project is unable to complete the work outlined in its charter.
* The charter assumptions have been modified by experience.
Depending on the outcome, we can assess how **lightweight** the process should be. For example, adopting a process similar to [Kubernetes](https://github.com/kubernetes/community/tree/master/committee-steering/governance#steps-to-update-an-existing-sig-charter) would allow a project lead to approve charter updates that are internal in scope.
Changes to `charter/README.md`:
1. Rename _What the process is_ to _Creating a project charter_
2. Add _Updating a project charter_
| process | updating a project charter once there is more experience with the charter process we should review the scenarios where updating a charter is recommended or required in practice the scenarios tend to include a project has completed the specific milestones in its original charter and proposes to add additional milestones a project is unable to complete the work outlined in its charter the charter assumptions have been modified by experience depending on the outcome we can assess how lightweight the process should be for example adopting a process similar to would allow a project lead to approve charter updates that are internal in scope changes to charter readme md rename what the process is to creating a project charter add updating a project charter | 1 |
83,207 | 24,005,912,641 | IssuesEvent | 2022-09-14 14:47:43 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | [wasm-mt] Issues with enabling threads in .NET 7 RC 1 | arch-wasm area-Build-mono | Using the `wasm-experimental` workload it shoudl be possible to create multi-threaded `wasmbrowser` projects in .NET 7 RC1
Issues:
- [x] First group - fixed on `main` by https://github.com/dotnet/runtime/pull/75162 and on `release/7.0` by https://github.com/dotnet/runtime/pull/75171
- `WasmGenerateAppBundle` doesn't create multi-threaded builds with `WasmEnableThreads` - only `WasmBuildNative` does - due to `dotnet.worker.js` missing from the runtime pack
- [`WorkloadManifest.targets.in`](https://github.com/dotnet/runtime/blob/536f34d9ab88e153aa11329e789afe6975fbe9a9/src/mono/nuget/Microsoft.NET.Workload.Mono.Toolchain.Manifest/WorkloadManifest.targets.in#L92) uses `WasmEnableThreading` instead of `WasmEnableThreads`
- The CoreLibs in the `multithread` and `perftrace` build variants don't define the correct feature flags.
- `dotnet new wasmbrowser` creates `browser.csproj`, not `<dirname>.csproj`
- [x] ~`dotnet new wasmbrowser` does not create `runtimeconfig.template.json` required for `dotnet run`~ this looks like a local caching issue
- [x] https://github.com/dotnet/runtime/issues/75263
## Detailed reproduction steps
1. Download a net7 RC 1 nightly tar.gz from [dotnet/installer](https://github.com/dotnet/installer) (I used the "Release/7.0.1xx-rc1 (7.0.x Runtime)" column)
2. Unpack into `${HOME}/work/net7-nightly`
3. Set `DOTNET_ROOT` and `PATH`:
```console
export DOTNET_ROOT="${HOME}/work/net7-nightly"
export PATH="${DOTNET_ROOT}:${PATH}"
```
4. Create the directory `${HOME}/work/net7-playground` and add the following into `${HOME}/work/net7-playground/NuGet.config`
```xml
<configuration>
<packageSources>
<add key="dotnet7" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet7/nuget/v3/index.json" />
</packageSources>
</configuration>
```
4. From the `net7-playground` directory (so that `NuGet.config` is in the current dir), install the `wasm-experimental` workload:
```console
cd ~/work/net7-playground
dotnet workload install wasm-experimental
```
5. Make a sample project directory `${HOME}/work/net7-playground/hithread` and create a `wasmbrowser` project:
```console
cd ~/work/net7-playground
mkdir hithread
cd hithread
dotnet new wasmbrowser
```
6. **Bug 1** note that there is `browser.csproj` not `hithread.csproj` - other templates create a .csproj file with the same name as the directory.
6. **Bug 2** create the missing `runtimeconfig.template.json` with teh following content
```json
{
"wasmHostProperties": {
"perHostConfig": [
{
"name": "browser",
"html-path": "index.html",
"Host": "browser"
}
]
}
}
```
7. Add the follwoing to your `browser.csproj`
```xml
<PropertyGroup>
<WasmEnableThreads>true</WasmEnableThreads>
<WasmEnableThreading>true</WasmEnableThreading>
<WasmBuildNative>true</WasmBuildNative>
</PropertyGroup>
```
**Bug 3** note that [`WorkloadManifest.targets.in`](https://github.com/dotnet/runtime/blob/536f34d9ab88e153aa11329e789afe6975fbe9a9/src/mono/nuget/Microsoft.NET.Workload.Mono.Toolchain.Manifest/WorkloadManifest.targets.in#L92) has a typo and uses `WasmEnableThreading` instead of `WasmEnableThreads`
8. Run `dotnet build`
```console
$ dotnet build
MSBuild version 17.4.0-preview-22416-02+5d102ae37 for .NET
Determining projects to restore...
Restored /Users/alklig/work/net7-playground/hithread/browser.csproj (in 2 ms).
/Users/alklig/work/net7-nightly/sdk/7.0.100-rc.2.22425.5/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.RuntimeIdentifierInference.targets(219,5): message NETSDK1057: You are using a preview version of .NET. See: https://aka.ms/dotnet-support-policy [/Users/alklig/work/net7-playground/hithread/browser.csproj]
browser -> /Users/alklig/work/net7-playground/hithread/bin/Debug/net7.0/browser-wasm/browser.dll
Compiling native assets with emcc with -O0. This may take a while ...
[2/3] corebindings.c -> corebindings.o [took 0.384s]
[1/3] pinvoke.c -> pinvoke.o [took 0.384s]
[3/3] driver.c -> driver.o [took 0.416s]
Linking for initial memory $(EmccInitialHeapSize)=536870912 bytes. Set this msbuild property to change the value.
Linking with emcc with -O0. This may take a while ...
...
Exception: FROZEN_CACHE is set, but cache file is missing: "sysroot/lib/wasm32-emscripten/libGL-mt.a" (in cache root path "/Users/alklig/work/net7-nightly/packs/Microsoft.NET.Runtime.Emscripten.3.1.12.Sdk.osx-x64/8.0.0-alpha.1.22415.5/tools/emscripten/cache")
/Users/alklig/work/net7-nightly/packs/Microsoft.NET.Runtime.WebAssembly.Sdk/7.0.0-rc.1.22422.12/Sdk/WasmApp.Native.targets(422,5): error MSB3073: The command "emcc "@/Users/alklig/work/net7-nightly/packs/Microsoft.NETCore.App.Runtime.Mono.multithread.browser-wasm/7.0.0-rc.1.22422.12/runtimes/browser-wasm/native/src/emcc-default.rsp" "@/Users/alklig/work/net7-nightly/packs/Microsoft.NETCore.App.Runtime.Mono.multithread.browser-wasm/7.0.0-rc.1.22422.12/runtimes/browser-wasm/native/src/emcc-link.rsp" "@/Users/alklig/work/net7-playground/hithread/obj/Debug/net7.0/browser-wasm/wasm/for-build/emcc-link.rsp"" exited with code 1. [/Users/alklig/work/net7-playground/hithread/browser.csproj]
```
**Bug 4** We [removed the `*-mt.a` static libraries](https://github.com/dotnet/emsdk/pull/43) in our EMSDK pack
9. Copy `*-mt.a` from an upstream EMSDK install into the emsdk pack:
```console
cp ~/work/dotnet-runtime/runtime/src/mono/wasm/emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/*-mt.a ~/work/net7-nightly/packs/Microsoft.NET.Runtime.Emscripten.3.1.12.Sdk.osx-x64/8.0.0-alpha.1.22415.5/tools/emscripten/cache/sysroot/lib/wasm32-emscripten
```
10. `dotnet build` should now succeed.
`dotnet run` shoudl serve the app. Open the URL in Chrome and open dev tools and hit reload. You should see something like this in the console:
```console
MONO_WASM: worker initializing essential C exports and APIs
MONO_WASM: worker initializing essential C exports and APIs
MONO_WASM: worker initializing essential C exports and APIs
MONO_WASM: worker initializing essential C exports and APIs
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: worker initializing essential C exports and APIs
dotnet.js:5 MONO_WASM: worker initializing essential C exports and APIs
dotnet.js:5 MONO_WASM: worker initializing essential C exports and APIs
dotnet.js:5 MONO_WASM: worker initializing essential C exports and APIs
dotnet.js:3 mono_wasm_runtime_ready fe00e07a-5519-4dfe-b35a-f867dbaf2e28
dotnet.js:2013 Hello, Console!
```
11. Clean the `bin` and `obj` directories.
Change the `browser.csproj` like this:
```xml
<PropertyGroup>
<WasmEnableThreads>true</WasmEnableThreads>
<WasmEnableThreading>true</WasmEnableThreading>
<WasmGenerateAppBundle>true</WasmGenerateAppBundle>
</PropertyGroup>
```
That is, replace `WasmBuildNative` by `WasmGenerateAppBundle`
12. run `dotnet build` again. Note that `dotnet.worker.js` isn't in the AppBundle directory. When running
the app prints 404s when trying to load it. From the Chrome DevTools console:
```console
GET http://127.0.0.1:9000/dotnet.worker.js 404 (Not Found)
dotnet.worker.js:1 GET http://127.0.0.1:9000/dotnet.worker.js 404 (Not Found)
dotnet.worker.js:1 GET http://127.0.0.1:9000/dotnet.worker.js 404 (Not Found)
dotnet.worker.js:1 GET http://127.0.0.1:9000/dotnet.worker.js 404 (Not Found)
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
```
| 1.0 | [wasm-mt] Issues with enabling threads in .NET 7 RC 1 - Using the `wasm-experimental` workload it shoudl be possible to create multi-threaded `wasmbrowser` projects in .NET 7 RC1
Issues:
- [x] First group - fixed on `main` by https://github.com/dotnet/runtime/pull/75162 and on `release/7.0` by https://github.com/dotnet/runtime/pull/75171
- `WasmGenerateAppBundle` doesn't create multi-threaded builds with `WasmEnableThreads` - only `WasmBuildNative` does - due to `dotnet.worker.js` missing from the runtime pack
- [`WorkloadManifest.targets.in`](https://github.com/dotnet/runtime/blob/536f34d9ab88e153aa11329e789afe6975fbe9a9/src/mono/nuget/Microsoft.NET.Workload.Mono.Toolchain.Manifest/WorkloadManifest.targets.in#L92) uses `WasmEnableThreading` instead of `WasmEnableThreads`
- The CoreLibs in the `multithread` and `perftrace` build variants don't define the correct feature flags.
- `dotnet new wasmbrowser` creates `browser.csproj`, not `<dirname>.csproj`
- [x] ~`dotnet new wasmbrowser` does not create `runtimeconfig.template.json` required for `dotnet run`~ this looks like a local caching issue
- [x] https://github.com/dotnet/runtime/issues/75263
## Detailed reproduction steps
1. Download a net7 RC 1 nightly tar.gz from [dotnet/installer](https://github.com/dotnet/installer) (I used the "Release/7.0.1xx-rc1 (7.0.x Runtime)" column)
2. Unpack into `${HOME}/work/net7-nightly`
3. Set `DOTNET_ROOT` and `PATH`:
```console
export DOTNET_ROOT="${HOME}/work/net7-nightly"
export PATH="${DOTNET_ROOT}:${PATH}"
```
4. Create the directory `${HOME}/work/net7-playground` and add the following into `${HOME}/work/net7-playground/NuGet.config`
```xml
<configuration>
<packageSources>
<add key="dotnet7" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet7/nuget/v3/index.json" />
</packageSources>
</configuration>
```
4. From the `net7-playground` directory (so that `NuGet.config` is in the current dir), install the `wasm-experimental` workload:
```console
cd ~/work/net7-playground
dotnet workload install wasm-experimental
```
5. Make a sample project directory `${HOME}/work/net7-playground/hithread` and create a `wasmbrowser` project:
```console
cd ~/work/net7-playground
mkdir hithread
cd hithread
dotnet new wasmbrowser
```
6. **Bug 1** note that there is `browser.csproj` not `hithread.csproj` - other templates create a .csproj file with the same name as the directory.
6. **Bug 2** create the missing `runtimeconfig.template.json` with teh following content
```json
{
"wasmHostProperties": {
"perHostConfig": [
{
"name": "browser",
"html-path": "index.html",
"Host": "browser"
}
]
}
}
```
7. Add the follwoing to your `browser.csproj`
```xml
<PropertyGroup>
<WasmEnableThreads>true</WasmEnableThreads>
<WasmEnableThreading>true</WasmEnableThreading>
<WasmBuildNative>true</WasmBuildNative>
</PropertyGroup>
```
**Bug 3** note that [`WorkloadManifest.targets.in`](https://github.com/dotnet/runtime/blob/536f34d9ab88e153aa11329e789afe6975fbe9a9/src/mono/nuget/Microsoft.NET.Workload.Mono.Toolchain.Manifest/WorkloadManifest.targets.in#L92) has a typo and uses `WasmEnableThreading` instead of `WasmEnableThreads`
8. Run `dotnet build`
```console
$ dotnet build
MSBuild version 17.4.0-preview-22416-02+5d102ae37 for .NET
Determining projects to restore...
Restored /Users/alklig/work/net7-playground/hithread/browser.csproj (in 2 ms).
/Users/alklig/work/net7-nightly/sdk/7.0.100-rc.2.22425.5/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.RuntimeIdentifierInference.targets(219,5): message NETSDK1057: You are using a preview version of .NET. See: https://aka.ms/dotnet-support-policy [/Users/alklig/work/net7-playground/hithread/browser.csproj]
browser -> /Users/alklig/work/net7-playground/hithread/bin/Debug/net7.0/browser-wasm/browser.dll
Compiling native assets with emcc with -O0. This may take a while ...
[2/3] corebindings.c -> corebindings.o [took 0.384s]
[1/3] pinvoke.c -> pinvoke.o [took 0.384s]
[3/3] driver.c -> driver.o [took 0.416s]
Linking for initial memory $(EmccInitialHeapSize)=536870912 bytes. Set this msbuild property to change the value.
Linking with emcc with -O0. This may take a while ...
...
Exception: FROZEN_CACHE is set, but cache file is missing: "sysroot/lib/wasm32-emscripten/libGL-mt.a" (in cache root path "/Users/alklig/work/net7-nightly/packs/Microsoft.NET.Runtime.Emscripten.3.1.12.Sdk.osx-x64/8.0.0-alpha.1.22415.5/tools/emscripten/cache")
/Users/alklig/work/net7-nightly/packs/Microsoft.NET.Runtime.WebAssembly.Sdk/7.0.0-rc.1.22422.12/Sdk/WasmApp.Native.targets(422,5): error MSB3073: The command "emcc "@/Users/alklig/work/net7-nightly/packs/Microsoft.NETCore.App.Runtime.Mono.multithread.browser-wasm/7.0.0-rc.1.22422.12/runtimes/browser-wasm/native/src/emcc-default.rsp" "@/Users/alklig/work/net7-nightly/packs/Microsoft.NETCore.App.Runtime.Mono.multithread.browser-wasm/7.0.0-rc.1.22422.12/runtimes/browser-wasm/native/src/emcc-link.rsp" "@/Users/alklig/work/net7-playground/hithread/obj/Debug/net7.0/browser-wasm/wasm/for-build/emcc-link.rsp"" exited with code 1. [/Users/alklig/work/net7-playground/hithread/browser.csproj]
```
**Bug 4** We [removed the `*-mt.a` static libraries](https://github.com/dotnet/emsdk/pull/43) in our EMSDK pack
9. Copy `*-mt.a` from an upstream EMSDK install into the emsdk pack:
```console
cp ~/work/dotnet-runtime/runtime/src/mono/wasm/emsdk/upstream/emscripten/cache/sysroot/lib/wasm32-emscripten/*-mt.a ~/work/net7-nightly/packs/Microsoft.NET.Runtime.Emscripten.3.1.12.Sdk.osx-x64/8.0.0-alpha.1.22415.5/tools/emscripten/cache/sysroot/lib/wasm32-emscripten
```
10. `dotnet build` should now succeed.
`dotnet run` shoudl serve the app. Open the URL in Chrome and open dev tools and hit reload. You should see something like this in the console:
```console
MONO_WASM: worker initializing essential C exports and APIs
MONO_WASM: worker initializing essential C exports and APIs
MONO_WASM: worker initializing essential C exports and APIs
MONO_WASM: worker initializing essential C exports and APIs
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: worker initializing essential C exports and APIs
dotnet.js:5 MONO_WASM: worker initializing essential C exports and APIs
dotnet.js:5 MONO_WASM: worker initializing essential C exports and APIs
dotnet.js:5 MONO_WASM: worker initializing essential C exports and APIs
dotnet.js:3 mono_wasm_runtime_ready fe00e07a-5519-4dfe-b35a-f867dbaf2e28
dotnet.js:2013 Hello, Console!
```
11. Clean the `bin` and `obj` directories.
Change the `browser.csproj` like this:
```xml
<PropertyGroup>
<WasmEnableThreads>true</WasmEnableThreads>
<WasmEnableThreading>true</WasmEnableThreading>
<WasmGenerateAppBundle>true</WasmGenerateAppBundle>
</PropertyGroup>
```
That is, replace `WasmBuildNative` by `WasmGenerateAppBundle`
12. run `dotnet build` again. Note that `dotnet.worker.js` isn't in the AppBundle directory. When running
the app prints 404s when trying to load it. From the Chrome DevTools console:
```console
GET http://127.0.0.1:9000/dotnet.worker.js 404 (Not Found)
dotnet.worker.js:1 GET http://127.0.0.1:9000/dotnet.worker.js 404 (Not Found)
dotnet.worker.js:1 GET http://127.0.0.1:9000/dotnet.worker.js 404 (Not Found)
dotnet.worker.js:1 GET http://127.0.0.1:9000/dotnet.worker.js 404 (Not Found)
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
dotnet.js:5 MONO_WASM: afterLoadWasmModuleToWorker added message event handler Worker {onmessage: ƒ, onerror: ƒ}
```
| non_process | issues with enabling threads in net rc using the wasm experimental workload it shoudl be possible to create multi threaded wasmbrowser projects in net issues first group fixed on main by and on release by wasmgenerateappbundle doesn t create multi threaded builds with wasmenablethreads only wasmbuildnative does due to dotnet worker js missing from the runtime pack uses wasmenablethreading instead of wasmenablethreads the corelibs in the multithread and perftrace build variants don t define the correct feature flags dotnet new wasmbrowser creates browser csproj not csproj dotnet new wasmbrowser does not create runtimeconfig template json required for dotnet run this looks like a local caching issue detailed reproduction steps download a rc nightly tar gz from i used the release x runtime column unpack into home work nightly set dotnet root and path console export dotnet root home work nightly export path dotnet root path create the directory home work playground and add the following into home work playground nuget config xml from the playground directory so that nuget config is in the current dir install the wasm experimental workload console cd work playground dotnet workload install wasm experimental make a sample project directory home work playground hithread and create a wasmbrowser project console cd work playground mkdir hithread cd hithread dotnet new wasmbrowser bug note that there is browser csproj not hithread csproj other templates create a csproj file with the same name as the directory bug create the missing runtimeconfig template json with teh following content json wasmhostproperties perhostconfig name browser html path index html host browser add the follwoing to your browser csproj xml true true true bug note that has a typo and uses wasmenablethreading instead of wasmenablethreads run dotnet build console dotnet build msbuild version preview for net determining projects to restore restored users alklig work playground hithread 
browser csproj in ms users alklig work nightly sdk rc sdks microsoft net sdk targets microsoft net runtimeidentifierinference targets message you are using a preview version of net see browser users alklig work playground hithread bin debug browser wasm browser dll compiling native assets with emcc with this may take a while corebindings c corebindings o pinvoke c pinvoke o driver c driver o linking for initial memory emccinitialheapsize bytes set this msbuild property to change the value linking with emcc with this may take a while exception frozen cache is set but cache file is missing sysroot lib emscripten libgl mt a in cache root path users alklig work nightly packs microsoft net runtime emscripten sdk osx alpha tools emscripten cache users alklig work nightly packs microsoft net runtime webassembly sdk rc sdk wasmapp native targets error the command emcc users alklig work nightly packs microsoft netcore app runtime mono multithread browser wasm rc runtimes browser wasm native src emcc default rsp users alklig work nightly packs microsoft netcore app runtime mono multithread browser wasm rc runtimes browser wasm native src emcc link rsp users alklig work playground hithread obj debug browser wasm wasm for build emcc link rsp exited with code bug we in our emsdk pack copy mt a from an upstream emsdk install into the emsdk pack console cp work dotnet runtime runtime src mono wasm emsdk upstream emscripten cache sysroot lib emscripten mt a work nightly packs microsoft net runtime emscripten sdk osx alpha tools emscripten cache sysroot lib emscripten dotnet build should now succeed dotnet run shoudl serve the app open the url in chrome and open dev tools and hit reload you should see something like this in the console console mono wasm worker initializing essential c exports and apis mono wasm worker initializing essential c exports and apis mono wasm worker initializing essential c exports and apis mono wasm worker initializing essential c exports and apis dotnet 
js mono wasm afterloadwasmmoduletoworker added message event handler worker onmessage ƒ onerror ƒ dotnet js mono wasm afterloadwasmmoduletoworker added message event handler worker onmessage ƒ onerror ƒ dotnet js mono wasm afterloadwasmmoduletoworker added message event handler worker onmessage ƒ onerror ƒ dotnet js mono wasm afterloadwasmmoduletoworker added message event handler worker onmessage ƒ onerror ƒ dotnet js mono wasm worker initializing essential c exports and apis dotnet js mono wasm worker initializing essential c exports and apis dotnet js mono wasm worker initializing essential c exports and apis dotnet js mono wasm worker initializing essential c exports and apis dotnet js mono wasm runtime ready dotnet js hello console clean the bin and obj directories change the browser csproj like this xml true true true that is replace wasmbuildnative by wasmgenerateappbundle run dotnet build again note that dotnet worker js isn t in the appbundle directory when running the app prints when trying to load it from the chrome devtools console console get not found dotnet worker js get not found dotnet worker js get not found dotnet worker js get not found dotnet js mono wasm afterloadwasmmoduletoworker added message event handler worker onmessage ƒ onerror ƒ dotnet js mono wasm afterloadwasmmoduletoworker added message event handler worker onmessage ƒ onerror ƒ dotnet js mono wasm afterloadwasmmoduletoworker added message event handler worker onmessage ƒ onerror ƒ dotnet js mono wasm afterloadwasmmoduletoworker added message event handler worker onmessage ƒ onerror ƒ | 0 |
159,124 | 6,040,986,915 | IssuesEvent | 2017-06-10 19:34:38 | tatools/sunshine | https://api.github.com/repos/tatools/sunshine | closed | Make sure sunshine prints all tests to be executed | Priority 1 | There is only one way to let a user know which tests will be run - print them to stdout. The requirement here is the following: the sunshine has to print all tests to be executed (after applying of configured filters). | 1.0 | Make sure sunshine prints all tests to be executed - There is only one way to let a user know which tests will be run - print them to stdout. The requirement here is the following: the sunshine has to print all tests to be executed (after applying of configured filters). | non_process | make sure sunshine prints all tests to be executed there is only one way to let a user know which tests will be run print them to stdout the requirement here is the following the sunshine has to print all tests to be executed after applying of configured filters | 0 |
11,563 | 14,440,039,529 | IssuesEvent | 2020-12-07 15:05:59 | jhu-idc/iDC-general | https://api.github.com/repos/jhu-idc/iDC-general | closed | Decorate the Graph to encapsulate the handler results as necessary (for example, support flagging descendants of a failed element to be skipped). | Graph Processor ingest | Elements in the Graph may be decorated by the Error Handler, directing the Processor to, for example, skip elements.
Estimate: 1 day | 1.0 | Decorate the Graph to encapsulate the handler results as necessary (for example, support flagging descendants of a failed element to be skipped). - Elements in the Graph may be decorated by the Error Handler, directing the Processor to, for example, skip elements.
Estimate: 1 day | process | decorate the graph to encapsulate the handler results as necessary for example support flagging descendants of a failed element to be skipped elements in the graph may be decorated by the error handler directing the processor to for example skip elements estimate day | 1 |
194,994 | 22,281,624,338 | IssuesEvent | 2022-06-11 01:16:23 | praneethpanasala/linux | https://api.github.com/repos/praneethpanasala/linux | reopened | CVE-2020-10773 (Medium) detected in linuxv4.19 | security vulnerability | ## CVE-2020-10773 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.19</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/s390/mm/cmm.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/s390/mm/cmm.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A stack information leak flaw was found in s390/s390x in the Linux kernel’s memory manager functionality, where it incorrectly writes to the /proc/sys/vm/cmm_timeout file. This flaw allows a local user to see the kernel data.
<p>Publish Date: 2020-09-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10773>CVE-2020-10773</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/gregkh/linux/commit/b8e51a6a9db94bc1fb18ae831b3dab106b5a4b5f">https://github.com/gregkh/linux/commit/b8e51a6a9db94bc1fb18ae831b3dab106b5a4b5f</a></p>
<p>Release Date: 2020-09-10</p>
<p>Fix Resolution: v5.4-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-10773 (Medium) detected in linuxv4.19 - ## CVE-2020-10773 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.19</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/s390/mm/cmm.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/s390/mm/cmm.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A stack information leak flaw was found in s390/s390x in the Linux kernel’s memory manager functionality, where it incorrectly writes to the /proc/sys/vm/cmm_timeout file. This flaw allows a local user to see the kernel data.
<p>Publish Date: 2020-09-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10773>CVE-2020-10773</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/gregkh/linux/commit/b8e51a6a9db94bc1fb18ae831b3dab106b5a4b5f">https://github.com/gregkh/linux/commit/b8e51a6a9db94bc1fb18ae831b3dab106b5a4b5f</a></p>
<p>Release Date: 2020-09-10</p>
<p>Fix Resolution: v5.4-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch master vulnerable source files arch mm cmm c arch mm cmm c vulnerability details a stack information leak flaw was found in in the linux kernel’s memory manager functionality where it incorrectly writes to the proc sys vm cmm timeout file this flaw allows a local user to see the kernel data publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
90,872 | 26,202,487,996 | IssuesEvent | 2023-01-03 18:53:53 | dotnet/arcade | https://api.github.com/repos/dotnet/arcade | closed | Build failed: dotnet-arcade-validation-official/main #20230103.1 | Build Failed | Build [#20230103.1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=2077872) partiallySucceeded
## :warning: : internal / dotnet-arcade-validation-official partiallySucceeded
### Summary
**Finished** - Tue, 03 Jan 2023 14:29:37 GMT
**Duration** - 114 minutes
**Requested for** - DotNet Bot
**Reason** - batchedCI
### Details
#### Promote Arcade to '.NET Eng - Latest' channel
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2077872/logs/354) - The latest build on 'main' branch for the 'runtime' repository was not successful.
### Changes
- [065f0fe3](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/065f0fe3ecedbe47cd1887c9f5486656959e3f11) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20230103.1 (#3588)
| 1.0 | Build failed: dotnet-arcade-validation-official/main #20230103.1 - Build [#20230103.1](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_build/results?buildId=2077872) partiallySucceeded
## :warning: : internal / dotnet-arcade-validation-official partiallySucceeded
### Summary
**Finished** - Tue, 03 Jan 2023 14:29:37 GMT
**Duration** - 114 minutes
**Requested for** - DotNet Bot
**Reason** - batchedCI
### Details
#### Promote Arcade to '.NET Eng - Latest' channel
- :warning: - [[Log]](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_apis/build/builds/2077872/logs/354) - The latest build on 'main' branch for the 'runtime' repository was not successful.
### Changes
- [065f0fe3](https://dev.azure.com/dnceng/7ea9116e-9fac-403d-b258-b31fcf1bb293/_git/017fb734-e4b4-4cc1-a90f-98a09ac25cd5/commit/065f0fe3ecedbe47cd1887c9f5486656959e3f11) - dotnet-maestro[bot] - Update dependencies from https://github.com/dotnet/arcade build 20230103.1 (#3588)
| non_process | build failed dotnet arcade validation official main build partiallysucceeded warning internal dotnet arcade validation official partiallysucceeded summary finished tue jan gmt duration minutes requested for dotnet bot reason batchedci details promote arcade to net eng latest channel warning the latest build on main branch for the runtime repository was not successful changes dotnet maestro update dependencies from build | 0 |
15,893 | 20,075,039,297 | IssuesEvent | 2022-02-04 11:43:45 | climatepolicyradar/navigator | https://api.github.com/repos/climatepolicyradar/navigator | opened | Group passages into passage blocks | Document processing | [unsupported is not supported]
Passage blocks contain multiple passages, and are often:
- Paragraphs
- Bulleted lists
- Numbered lists
- Indented lists
A block is identified as being logically distinct from surrounding blocks using separators such as whitespace, indents, bullet point symbols or list items.
The purpose of grouping text into blocks will be to enable a passage to be read by a user and interpreted by models in context.
| 1.0 | Group passages into passage blocks - [unsupported is not supported]
Passage blocks contain multiple passages, and are often:
- Paragraphs
- Bulleted lists
- Numbered lists
- Indented lists
A block is identified as being logically distinct from surrounding blocks using separators such as whitespace, indents, bullet point symbols or list items.
The purpose of grouping text into blocks will be to enable a passage to be read by a user and interpreted by models in context.
| process | group passages into passage blocks passage blocks contain multiple passages and are often paragraphs bulleted lists numbered lists indented lists a block is identified as being logically distinct from surrounding blocks using separators such as whitespace indents bullet point symbols or list items the purpose of grouping text into blocks will be to enable a passage to be read by a user and interpreted by models in context | 1 |
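The record above describes grouping passages into blocks delimited by whitespace and list markers. A minimal sketch of such a grouper follows — the function name and the exact marker rules are assumptions for illustration, not the project's actual implementation:

```python
import re

# Markers that typically open a list item: bullets, numbers, letters.
LIST_MARKER = re.compile(r"^\s*(?:[-*\u2022]|\d+[.)]|[a-z][.)])\s+", re.IGNORECASE)

def group_into_blocks(lines):
    """Group raw text lines into passage blocks.

    A new block starts at a blank line or at a list-marker line;
    consecutive plain lines are folded into the current block.
    """
    blocks, current = [], []
    for line in lines:
        if not line.strip():           # blank line ends the current block
            if current:
                blocks.append(" ".join(current))
                current = []
        elif LIST_MARKER.match(line):  # each list item is its own block
            if current:
                blocks.append(" ".join(current))
            current = [line.strip()]
        else:
            current.append(line.strip())
    if current:
        blocks.append(" ".join(current))
    return blocks

text = ["A paragraph line,", "continued here.", "", "- first bullet", "- second bullet"]
print(group_into_blocks(text))
# → ['A paragraph line, continued here.', '- first bullet', '- second bullet']
```

Keeping each list item as its own block matches the stated goal: a passage can then be read, and interpreted by models, in the context of its enclosing block.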
13,919 | 16,676,593,189 | IssuesEvent | 2021-06-07 16:57:51 | ESMValGroup/ESMValCore | https://api.github.com/repos/ESMValGroup/ESMValCore | opened | Load and use fx variable even if not broadcastable onto model cube data | enhancement preprocessor | Thanks to @sloosvel we now have a nice integration of ancillary vars for fx vars, including CMOR checks and that fancy jazz. There is, however, room for improvement, as I see it: there is a hard [cut-off](https://github.com/ESMValGroup/ESMValCore/blob/fc87d72c3fce12a2117b4b4e42a4857045c4d404/esmvalcore/preprocessor/_ancillary_vars.py#L34) that stops the process of using a certain fx var if its data array is not broadcastable onto the model data array: I think we can relax this a bit:
- if the fx data is time-invariant, and the model data is not, we can still use the fx data by propagating the fx data at every time point of the model data
- if the fx data is on a finer/coarser time axis than the model data, we can perform a time regridding and still use the fx data (and it'd even be broadcastable onto model data this time around)
These are statistical massages of the fx data that **may** not be sound, so please correct me here if you see any issues with these approaches :beer: | 1.0 | Load and use fx variable even if not broadcastable onto model cube data - Thanks to @sloosvel we now have a nice integration of anciliary vars for fx vars, including CMOR checks and that fancy jazz. There is, however, room for improvement, as I see it: there is a hard [cut-off](https://github.com/ESMValGroup/ESMValCore/blob/fc87d72c3fce12a2117b4b4e42a4857045c4d404/esmvalcore/preprocessor/_ancillary_vars.py#L34) that stops the process of using a certain fx var if its data array is not broadcastable onto the model data array: I think we can relax this a bit:
- if the fx data is time-invariant, and the model data is not, we can still use the fx data by propagating the fx data at every time point of the model data
- if the fx data is on a finer/coarser time axis than the model data, we can perform a time regridding and still use the fx data (and it'd even be broadcastable onto model data this time around)
These are statistical massages of the fx data that **may** not be sound, so please correct me here if you see any issues with these approaches :beer: | process | load and use fx variable even if not broadcastable onto model cube data thanks to sloosvel we now have a nice integration of anciliary vars for fx vars including cmor checks and that fancy jazz there is however room for improvement as i see it there is a hard that stops the process of using a certain fx var if its data array is not broadcastable onto the model data array i think we can relax this a bit if the fx data is time invariant and the model data is not we can still use the fx data by propagating the fx data at every time point of the model data if the fx data is on a finer coarser time axis than the model data we can perform a time regridding and still use the fx data and it d even be broadcastable onto model data this time around these are statistical massages of the fx data that may not be sound so please correct me here if you see any issues with these approaches beer | 1 |
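The first relaxation proposed in the record above — propagating a time-invariant fx field to every time point of the model data — is ordinary array broadcasting. A hedged NumPy sketch (illustrative only; not the actual ESMValCore code, and the shapes are made up):

```python
import numpy as np

# Model data: (time, lat, lon); time-invariant fx field (e.g. a land fraction): (lat, lon).
model = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)
fx = np.ones((3, 4)) * 0.5

# np.broadcast_to "propagates" the fx field to every time point
# without copying the underlying data.
fx_expanded = np.broadcast_to(fx, model.shape)

masked = model * fx_expanded   # fx applied at each time step
print(fx_expanded.shape)       # → (2, 3, 4)
```

The second relaxation (fx data on a finer or coarser time axis) would additionally need a time regridding step before the arrays line up, which is outside this sketch.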
281,177 | 21,315,382,748 | IssuesEvent | 2022-04-16 07:15:11 | jiewei98/pe | https://api.github.com/repos/jiewei98/pe | opened | Invalid link for Setting up, getting started | type.DocumentationBug severity.VeryLow | 
Upon clicking on the link under "Refer to the guide Setting up and getting started.", I was redirected to this page instead of a page explaining how to set up the app.
<!--session: 1650089759487-ced5a3fe-bf31-4e54-9643-af6622258d64-->
<!--Version: Web v3.4.2--> | 1.0 | Invalid link for Setting up, getting started - 
Upon clicking on the link under "Refer to the guide Setting up and getting started.", I was redirected to this page instead of a page explaining how to set up the app.
<!--session: 1650089759487-ced5a3fe-bf31-4e54-9643-af6622258d64-->
<!--Version: Web v3.4.2--> | non_process | invalid link for setting up getting started upon clicking on the link under refer to the guide setting up and getting started i was redirected to this page instead of a page explaining how to set up the app | 0 |
16,906 | 22,217,549,259 | IssuesEvent | 2022-06-08 04:23:01 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | closed | It was reported that the path can be retrieved using `bazel query` and replacing `BUILD` with the script name, so that the whole `py_binary_path` hack can be avoided: | more data needed type: support / not a bug (process) | It was reported that the path can be retrieved using `bazel query` and replacing `BUILD` with the script name, so that the whole `py_binary_path` hack can be avoided:
```
`bazel query @com_googlesource_gerrit_bazlets//tools/eclipse:project --output location | sed s/BUILD:.*//`project.py
```
_Originally posted by @davido in https://github.com/bazelbuild/bazel/issues/2452#issuecomment-316895752_ | 1.0 | It was reported that the path can be retrieved using `bazel query` and replacing `BUILD` with the script name, so that the whole `py_binary_path` hack can be avoided: - It was reported that the path can be retrieved using `bazel query` and replacing `BUILD` with the script name, so that the whole `py_binary_path` hack can be avoided:
```
`bazel query @com_googlesource_gerrit_bazlets//tools/eclipse:project --output location | sed s/BUILD:.*//`project.py
```
_Originally posted by @davido in https://github.com/bazelbuild/bazel/issues/2452#issuecomment-316895752_ | process | it was reported that the path can be retrieved using bazel query and replacing build with the script name so that the whole py binary path hack can be avoided it was reported that the path can be retrieved using bazel query and replacing build with the script name so that the whole py binary path hack can be avoided bazel query com googlesource gerrit bazlets tools eclipse project output location sed s build project py originally posted by davido in | 1 |
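The one-liner in the record above strips everything from `BUILD:` onward out of the `bazel query --output location` string and appends the script name. A Python equivalent of that `sed` substitution, for illustration — the sample location string is hypothetical:

```python
import re

# Hypothetical output of `bazel query ... --output location`:
location = "/home/user/repo/tools/eclipse/BUILD:12:1: py_binary rule //tools/eclipse:project"

# Equivalent of `sed s/BUILD:.*//` followed by appending the script name.
script_path = re.sub(r"BUILD:.*", "", location) + "project.py"
print(script_path)  # → /home/user/repo/tools/eclipse/project.py
```

The substitution leaves the directory prefix of the `BUILD` file, which is exactly the directory that contains the script.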
9,599 | 12,543,727,698 | IssuesEvent | 2020-06-05 16:01:47 | spring-projects/spring-hateoas | https://api.github.com/repos/spring-projects/spring-hateoas | closed | Cannot retrieve embedded collection of resources | process: waiting for feedback question | I have a Spring-Data-Rest project that exposes `Question` objects. I would like to retrieve this list using Spring-Hateoas, but when I make my request, the embedded Questions appear as links in the Resource object rather than as full Question objects. This means I would need to re-fetch each question to get the actual content, even though the content was already provided in the actual response. Is this intentional behavior?
My client code looks like this:
``` java
ResponseEntity<Resources<Question>> responseEntity =
        restTemplate.exchange("https://example.com/data/questions",
                HttpMethod.GET, null,
                new ParameterizedTypeReference<Resources<Question>>() {},
                Collections.emptyMap());
if (responseEntity.getStatusCode() == HttpStatus.OK) {
    Resources<Question> questionsResource = responseEntity.getBody();
    Collection<Question> questions = questionsResource.getContent();
}
```
This code executes, but the questions Collection is empty. When I call `getLinks()`, I see that all of my `Question` objects are listed there (as Link objects), despite not being returned as links in the HTTP response.
I have the same experience when I use `Resource<Question>` instead of `Resources<Question>`.
Here is an example of the response from `.../data/questions`:
``` json
{
  "_links": {
    "self": {
      "href": "https://example.com/data/questions{?page,size,sort}",
      "templated": true
    },
    "next": {
      "href": "https://example.com/data/questions?page=1&size=20{&sort}",
      "templated": true
    },
    "search": {
      "href": "https://example.com/data/questions/search"
    }
  },
  "_embedded": {
    "questions": [
      {
        "questionText": "What is your favorite color?",
        "_links": {
          "self": {
            "href": "https://example.com/data/questions/1"
          }
        }
      }
    ]
  }
}
```
| 1.0 | Cannot retrieve embedded collection of resources - I have a Spring-Data-Rest project that exposes `Question` objects. I would like to retrieve this list using Spring-Hateoas, but when I make my request, the embedded Questions appear as links in the Resource object rather than as full Question objects. This means I would need to re-fetch each question to get the actual content, even though the content was already provided in the actual response. Is this intentional behavior?
My client code looks like this:
``` java
ResponseEntity<Resources<Question>> responseEntity =
        restTemplate.exchange("https://example.com/data/questions",
                HttpMethod.GET, null,
                new ParameterizedTypeReference<Resources<Question>>() {},
                Collections.emptyMap());
if (responseEntity.getStatusCode() == HttpStatus.OK) {
    Resources<Question> questionsResource = responseEntity.getBody();
    Collection<Question> questions = questionsResource.getContent();
}
```
This code executes, but the questions Collection is empty. When I call `getLinks()`, I see that all of my `Question` objects are listed there (as Link objects), despite not being returned as links in the HTTP response.
I have the same experience when I use `Resource<Question>` instead of `Resources<Question>`.
Here is an example of the response from `.../data/questions`:
``` json
{
  "_links": {
    "self": {
      "href": "https://example.com/data/questions{?page,size,sort}",
      "templated": true
    },
    "next": {
      "href": "https://example.com/data/questions?page=1&size=20{&sort}",
      "templated": true
    },
    "search": {
      "href": "https://example.com/data/questions/search"
    }
  },
  "_embedded": {
    "questions": [
      {
        "questionText": "What is your favorite color?",
        "_links": {
          "self": {
            "href": "https://example.com/data/questions/1"
          }
        }
      }
    ]
  }
}
| process | cannot retrieve embedded collection of resources i have a spring data rest project that exposes question objects i would like to retrieve this list using spring hateoas but when i make my request the embedded questions appear as links in the resource object rather than as full question objects this means i would need to re fetch each question to get the actual content even though the content was already provided in the actual response is this intentional behavior my client code looks like this java responseentity responseentity resttemplate exchange httpmethod get null new parameterizedtypereference collections emptymap if responseentity getstatuscode httpstatus ok resources questionsresource responseentity getbody collection questions questionsresource getcontent this code executes but the questions collection is empty when i call getlinks i see that all of my question objects are listed there as link objects despite not being returned as links in the http response i have the same experience when i use resource instead of resources here is an example of the response from data questions java links self href templated true next href templated true search href embedded questions questiontext what is your favorite color links self href | 1 |
308,336 | 23,244,188,628 | IssuesEvent | 2022-08-03 18:24:20 | TriEmbed/quercus | https://api.github.com/repos/TriEmbed/quercus | closed | LICENSE updates | documentation | A single LICENSE file should have an MIT license for software (all subdirs except hardware that will contain white, green and purple "soon") with active devs and attribution for the dev who created the Vue support in aardvark plus a CC0 license for the hardware with the original and active devs listed. | 1.0 | LICENSE updates - A single LICENSE file should have an MIT license for software (all subdirs except hardware that will contain white, green and purple "soon") with active devs and attribution for the dev who created the Vue support in aardvark plus a CC0 license for the hardware with the original and active devs listed. | non_process | license updates a single license file should have an mit license for software all subdirs except hardware that will contain white green and purple soon with active devs and attribution for the dev who created the vue support in aardvark plus a license for the hardware with the original and active devs listed | 0 |
192,891 | 15,361,295,554 | IssuesEvent | 2021-03-01 17:57:50 | dankelley/oce | https://api.github.com/repos/dankelley/oce | closed | as.ctd documentation needs updating (at least on website) | documentation website | Was just looking at the `as.ctd()` documentation on the pkgdown website at https://dankelley.github.io/oce/reference/as.ctd.html and I see as below:

The question is, "***CAN*** the `salinity` argument be an `rsk` object????" 😄 | 1.0 | as.ctd documentation needs updating (at least on website) - Was just looking at the `as.ctd()` documentation on the pkgdown website at https://dankelley.github.io/oce/reference/as.ctd.html and I see as below:

The question is, "***CAN*** the `salinity` argument be an `rsk` object????" 😄 | non_process | as ctd documentation needs updating at least on website was just looking at the as ctd documentation on the pkgdown website at and i see as below the question is can the salinity argument be an rsk object 😄 | 0 |
8,211 | 11,404,848,911 | IssuesEvent | 2020-01-31 10:41:20 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | Merge: "GO:0002818 intracellular defense response" into GO:0006968 cellular defense response | multi-species process obsoletion | I am checking a paper about an extracellular protease inhibitor which inhibits extracellular plant proteases.
PMID:15096512
@CuzickA has found "intracellular defense response" (a new one to me)
and hence suggested "extracellular defense response"
We could just do
serine-type peptidase inhibitor activity
part_of "defense response" occurs_in extracellular space
but in this case I would object to "intracellular defense response" as a term.
Suggestions?
| 1.0 | Merge: "GO:0002818 intracellular defense response" into GO:0006968 cellular defense response - I am checking a paper about an extracellular protease inhibitor which inhibits extracellular plant proteases.
PMID:15096512
@CuzickA has found "intracellular defense response" (a new one to me)
and hence suggested "extracellular defense response"
We could just do
serine-type peptidase inhibitor activity
part_of "defense response" occurs_in extracellular space
but in this case I would object to "intracellular defense response" as a term.
Suggestions?
| process | merge go intracellular defense response into go cellular defense response i am checking a paper about an extracellular protease inhibitor which inhibits extracellular plant proteases pmid cuzicka has found intracellular defense response a new one to me and hence sugggested extracellular defense response we could just do serine type peptidase inhibitor activity part of defense response occurs in extracellular space but in this case i would object to intracellular defense response as a term suggestions | 1 |
151,264 | 12,026,106,379 | IssuesEvent | 2020-04-12 12:40:12 | Students-of-the-city-of-Kostroma/trpo_automation | https://api.github.com/repos/Students-of-the-city-of-Kostroma/trpo_automation | closed | Unit tests for data sending methods (mail service) | Server Testing | # Due date: 20.04.20
## Development tasks: #48
### Epic: #209
_Notes_: none | 1.0 | Unit tests for data sending methods (mail service) - # Due date: 20.04.20
## Development tasks: #48
### Epic: #209
_Notes_: none | non_process | unit tests for data sending methods mail service due date development tasks epic notes none | 0
3,659 | 6,694,645,898 | IssuesEvent | 2017-10-10 03:24:37 | HelpyTeam/HelpyDocuments | https://api.github.com/repos/HelpyTeam/HelpyDocuments | closed | Create API Document for <User> Chat | API Document In Process priority/1 | # Overview
Create API for feature <User> Get Chat.
# Target
- [ ] Swagger API document. | 1.0 | Create API Document for <User> Chat - # Overview
Create API for feature <User> Get Chat.
# Target
- [ ] Swagger API document. | process | create api document for chat overview create api for feature get chat target swagger api document | 1 |
5,457 | 8,318,571,365 | IssuesEvent | 2018-09-25 14:58:39 | bitshares/bitshares-community-ui | https://api.github.com/repos/bitshares/bitshares-community-ui | closed | Button component | good first issue process ui | Use `Components/Button.vue`
- Styling - use tailwind classes
- Should receive props & display accordingly:
1) :text - [String] button text
2) :disabled - [Boolean] if button disabled or not
3) :loading - [Boolean] if button has a loading state -> display some css loader inside (make loader a component as well)
4) :size= - [String] optional, "normal" - default, "small" - smaller
5) :width - [String] optional, "full" -> 100% width
6) :type - [String] optional, "round" should make the button round
- Should $emit 'click' event on click
- Should have Styleguidist Button doc with examples
- Covered by jest unit-tests | 1.0 | Button component - Use `Components/Button.vue`
- Styling - use tailwind classes
- Should receive props & display accordingly:
1) :text - [String] button text
2) :disabled - [Boolean] if button disabled or not
3) :loading - [Boolean] if button has a loading state -> display some css loader inside (make loader a component as well)
4) :size= - [String] optional, "normal" - default, "small" - smaller
5) :width - [String] optional, "full" -> 100% width
6) :type - [String] optional, "round" should make the button round
- Should $emit 'click' event on click
- Should have Styleguidist Button doc with examples
- Covered by jest unit-tests | process | button component use components button vue styling use tailwind classes should receive props display accordingly text button text disabled if button disabled or not loading if button has a loading state display some css loader inside make loader a component as well size optional normal default small smaller width optional full width type optional round should make the button round should emit click event on click should have styleguidist button doc with examples covered by jest unit tests | 1 |
20,973 | 27,819,580,250 | IssuesEvent | 2023-03-19 03:39:19 | cse442-at-ub/project_s23-the-fellas | https://api.github.com/repos/cse442-at-ub/project_s23-the-fellas | closed | Create a DB table with user accounts for all team members using everyone's UBIT name | Processing Task Sprint 1 Sprint 2 | **Task Tests**
*Test 1*
1) Go to the login page for phpMyAdmin
2) Sign into the oceanus server using UBIT name as username and person number as password
3) Under the Databases tab, select the cse442_2023_spring_team_c_db database
4) Verify that there is a table named "user_accounts"
5) Verify that the user_accounts table has two columns: one named "username" and one named "password"
6) Verify that the user accounts table has four entries with usernames "drboyle2", "devincle", "dondreta", and "jtsang3".
7) Verify that all four entries in the user accounts table have "test" in the password column. | 1.0 | Create a DB table with user accounts for all team members using everyone's UBIT name - **Task Tests**
*Test 1*
1) Go to the login page for phpMyAdmin
2) Sign into the oceanus server using UBIT name as username and person number as password
3) Under the Databases tab, select the cse442_2023_spring_team_c_db database
4) Verify that there is a table named "user_accounts"
5) Verify that the user_accounts table has two columns: one named "username" and one named "password"
6) Verify that the user accounts table has four entries with usernames "drboyle2", "devincle", "dondreta", and "jtsang3".
7) Verify that all four entries in the user accounts table have "test" in the password column. | process | create a db table with user accounts for all team members using everyone s ubit name task tests test go to the login page for phpmyadmin sign into the oceanus server using ubit name as username and person number as password under the databases tab select the spring team c db database verify that there is a table named user accounts verify that the user accounts table has two columns one named username and one named password verify that the user accounts table has four entries with usernames devincle dondreta and verify that all four entries in the user accounts table have test in the password column | 1 |
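The verification steps in the record above boil down to a two-column table seeded with four rows. As an illustration, the same table can be sketched with SQLite standing in for the course's MySQL instance — the table name, columns, usernames, and "test" passwords come from the record; everything else is an assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the cse442 MySQL database
conn.execute("CREATE TABLE user_accounts (username TEXT PRIMARY KEY, password TEXT)")
conn.executemany(
    "INSERT INTO user_accounts (username, password) VALUES (?, ?)",
    [(u, "test") for u in ("drboyle2", "devincle", "dondreta", "jtsang3")],
)

rows = conn.execute(
    "SELECT username, password FROM user_accounts ORDER BY username"
).fetchall()
print(rows)
# → [('devincle', 'test'), ('dondreta', 'test'), ('drboyle2', 'test'), ('jtsang3', 'test')]
```

The `SELECT` mirrors manual checks 4–7: the table exists, has the two expected columns, holds exactly four entries, and every password column reads "test".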
5,131 | 7,917,838,927 | IssuesEvent | 2018-07-04 11:13:00 | Open-EO/openeo-api | https://api.github.com/repos/Open-EO/openeo-api | opened | Pagination and (Open)Search for jobs, services, files, process_graphs | in discussion job management process graph management service management user management | I'm wondering whether it could be useful to introduce (Open)Search and pagination to the collections, i.e. for the list of jobs, services, files, process_graphs? I'm not speaking about the response format, but about the query parameters. | 1.0 | Pagination and (Open)Search for jobs, services, files, process_graphs - I'm wondering whether it could be useful to introduce (Open)Search and pagination to the collections, i.e. for the list of jobs, services, files, process_graphs? I'm not speaking about the response format, but about the query parameters. | process | pagination and open search for jobs services files process graphs i m wondering whether it could be useful to introduce open search and pagination to the collections i e for the list of jobs services files process graphs i m not speaking about the response format but about the query parameters | 1 |
12,465 | 14,937,391,216 | IssuesEvent | 2021-01-25 14:35:06 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [Android] [Audit Logs] "userID" is displayed null for the events | Android Bug P2 Process: Fixed Process: Tested dev | Event:-
1. PASSWORD_RESET_SUCCEEDED
Sample snippet
```
{
"insertId": "1xxk52ag3ixdmsf",
"jsonPayload": {
"description": null,
"resourceServer": "PARTICIPANT USER DATASTORE",
"appId": "BTCDEV001",
"userIp": "157.45.162.176",
"destination": "SCIM AUTH SERVER",
"mobilePlatform": "ANDROID",
"occurred": 1611240030322,
"userAccessLevel": null,
"siteId": null,
"participantId": null,
"destinationApplicationVersion": "1.0",
"sourceApplicationVersion": "1.0",
"correlationId": "SCJfbJLx83G45JpSFBeYYtE6tEESvAuvJPNkiqshEiPJfV8BhP",
"studyId": null,
"appVersion": "1.0.58",
"userId": null,
"source": "MOBILE APPS",
"eventCode": "PASSWORD_RESET_SUCCEEDED",
"platformVersion": "1.0",
"studyVersion": null
},
"resource": {
"type": "global",
"labels": {
"project_id": "mystudies-open-impl-track1-dev"
}
},
"timestamp": "2021-01-21T14:40:30.322Z",
"severity": "INFO",
"logName": "projects/mystudies-open-impl-track1-dev/logs/application-audit-log",
"receiveTimestamp": "2021-01-21T14:40:30.452503878Z"
}
``` | 2.0 | [Android] [Audit Logs] "userID" is displayed null for the events - Event:-
1. PASSWORD_RESET_SUCCEEDED
Sample snippet
```
{
"insertId": "1xxk52ag3ixdmsf",
"jsonPayload": {
"description": null,
"resourceServer": "PARTICIPANT USER DATASTORE",
"appId": "BTCDEV001",
"userIp": "157.45.162.176",
"destination": "SCIM AUTH SERVER",
"mobilePlatform": "ANDROID",
"occurred": 1611240030322,
"userAccessLevel": null,
"siteId": null,
"participantId": null,
"destinationApplicationVersion": "1.0",
"sourceApplicationVersion": "1.0",
"correlationId": "SCJfbJLx83G45JpSFBeYYtE6tEESvAuvJPNkiqshEiPJfV8BhP",
"studyId": null,
"appVersion": "1.0.58",
"userId": null,
"source": "MOBILE APPS",
"eventCode": "PASSWORD_RESET_SUCCEEDED",
"platformVersion": "1.0",
"studyVersion": null
},
"resource": {
"type": "global",
"labels": {
"project_id": "mystudies-open-impl-track1-dev"
}
},
"timestamp": "2021-01-21T14:40:30.322Z",
"severity": "INFO",
"logName": "projects/mystudies-open-impl-track1-dev/logs/application-audit-log",
"receiveTimestamp": "2021-01-21T14:40:30.452503878Z"
}
``` | process | userid is displayed null for the events event password reset succeeded sample snippet insertid jsonpayload description null resourceserver participant user datastore appid userip destination scim auth server mobileplatform android occurred useraccesslevel null siteid null participantid null destinationapplicationversion sourceapplicationversion correlationid studyid null appversion userid null source mobile apps eventcode password reset succeeded platformversion studyversion null resource type global labels project id mystudies open impl dev timestamp severity info logname projects mystudies open impl dev logs application audit log receivetimestamp | 1 |
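A check for the bug reported in the record above — audit events arriving with `userId` set to null — can be sketched by validating each payload's required fields. The field names are taken from the sample snippet; the validator itself is hypothetical, not part of the fda-mystudies codebase:

```python
REQUIRED_FIELDS = ("userId", "eventCode", "source")

def missing_fields(event: dict) -> list:
    """Return required fields that are absent or null in an audit event."""
    return [f for f in REQUIRED_FIELDS if event.get(f) is None]

event = {
    "eventCode": "PASSWORD_RESET_SUCCEEDED",
    "source": "MOBILE APPS",
    "userId": None,   # the value reported as null in the issue
}
print(missing_fields(event))  # → ['userId']
```

Wiring such a check into the logging path would catch events like PASSWORD_RESET_SUCCEEDED before they land in Cloud Logging with a null user identifier.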
54,259 | 23,218,071,206 | IssuesEvent | 2022-08-02 15:36:03 | internetarchive/openlibrary | https://api.github.com/repos/internetarchive/openlibrary | opened | Create Roadmap for Cover Service improvements | Type: Feature Request Module: Cover Service Needs: Detail Priority: 2 Lead: @cclauss | <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
### Describe the problem that you'd like solved
<!-- A clear and concise description of what you want to happen. -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
<!-- Which suggestions or requirements should be considered for how feature needs to appear or be implemented? -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
| 1.0 | Create Roadmap for Cover Service improvements - <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
### Describe the problem that you'd like solved
<!-- A clear and concise description of what you want to happen. -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
<!-- Which suggestions or requirements should be considered for how feature needs to appear or be implemented? -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
### Stakeholders
<!-- @ tag stakeholders of this bug -->
| non_process | create roadmap for cover service improvements describe the problem that you d like solved proposal constraints additional context stakeholders | 0 |
171,350 | 27,106,178,569 | IssuesEvent | 2023-02-15 12:17:10 | open-sauced/insights | https://api.github.com/repos/open-sauced/insights | closed | Feature: add Toast support to application | 💡 feature 🖍 needs design 👀 needs triage | ### Type of feature
🍕 Feature
### Current behavior
We have no toast implemented on the insights app
### Suggested solution
Just like we have in https://github.com/open-sauced/hot/blob/beta/src/lib/reactHotToast.ts... This should be added to the insights.
designs will be needed for the popup in four states:
- success
- error
- warning
- and default
cc: @getaheaddev
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
### Contributing Docs
- [X] I agree to follow this project's Contribution Docs | 1.0 | Feature: add Toast support to application - ### Type of feature
🍕 Feature
### Current behavior
We have no toast implemented on the insights app
### Suggested solution
Just like we have in https://github.com/open-sauced/hot/blob/beta/src/lib/reactHotToast.ts... This should be added to the insights.
designs will be needed for the popup in four states:
- success
- error
- warning
- and default
cc: @getaheaddev
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
### Contributing Docs
- [X] I agree to follow this project's Contribution Docs | non_process | feature add toast support to application type of feature 🍕 feature current behavior we have no toast implemented on the insights app suggested solution just like we have in this should be added to the insights designs will be needed for the popup in four states success error warning and default cc getaheaddev additional context no response code of conduct i agree to follow this project s code of conduct contributing docs i agree to follow this project s contribution docs | 0 |
11,385 | 14,222,946,777 | IssuesEvent | 2020-11-17 17:32:04 | googleapis/repo-automation-bots | https://api.github.com/repos/googleapis/repo-automation-bots | closed | make `addOrUpdateIssueComment` more accessible | type: process | Now it requires `probot.Context`, but it only needs Octokit. | 1.0 | make `addOrUpdateIssueComment` more accessible - Now it requires `probot.Context`, but it only needs Octokit. | process | make addorupdateissuecomment more accessible now it requires probot context but it only needs octokit | 1
301,622 | 22,766,153,886 | IssuesEvent | 2022-07-08 04:44:33 | k01ek/netbox-devicetype-importer | https://api.github.com/repos/k01ek/netbox-devicetype-importer | closed | What Scopes are required for Github token? | documentation question | What scopes are required for the Github token? I only want to give minimal permissions for hopefully obvious reasons. | 1.0 | What Scopes are required for Github token? - What scopes are required for the Github token? I only want to give minimal permissions for hopefully obvious reasons. | non_process | what scopes are required for github token what scopes are required for the github token i only want to give minimal permissions for hopefully obvious reasons | 0 |
12,832 | 15,213,874,051 | IssuesEvent | 2021-02-17 12:26:16 | trilinos/Trilinos | https://api.github.com/repos/trilinos/Trilinos | closed | Add written process for invoking a retrospective process in Trilinos | CLOSED_DUE_TO_INACTIVITY MARKED_FOR_CLOSURE process improvement retrospective | For issues such as #594 we want to create a follow-on issue such as #632 to record a retrospective that can be used to improve development processes in the future.
We should add a write-up to the Trilinos wiki describing this process:
- Criteria for when a retrospective is useful.
- Use of the retrospective label.
- Format of retrospective.
- Location of the retrospective content.
| 1.0 | Add written process for invoking a retrospective process in Trilinos - For issues such as #594 we want to create a follow-on issue such as #632 to record a retrospective that can be used to improve development processes in the future.
We should add a write-up to the Trilinos wiki describing this process:
- Criteria for when a retrospective is useful.
- Use of the retrospective label.
- Format of retrospective.
- Location of the retrospective content.
| process | add written process for invoking a retrospective process in trilinos for issues such as we want to create a follow on issue such as to record a retrospective that can be used to improve development processes in the future we should add a write up to the trilinos wiki describing this process criteria for when a retrospective is useful use of the retrospective label format of retrospective location of the retrospective content | 1 |
400,217 | 11,770,678,091 | IssuesEvent | 2020-03-15 20:18:21 | Extum/material | https://api.github.com/repos/Extum/material | closed | [Mobile] Drawer notification icon problem | bug help wanted mobile priority: high | **Describe the bug**
If you open the forum in mobile, you can see, the notifications icon (and card) is wrong-sized
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the test forum on mobile
2. Click on Log In
3. After Logging in, expand the drawer
4. See error
**Expected behavior**
The notifications section's card looks normal
**Screenshots**

**Environment (please complete the following information):**
- OS: Windows 10 Home && Android
- Browser: Opera
- Flarum Url: materialtheme.freeflarum.com
| 1.0 | [Mobile] Drawer notification icon problem - **Describe the bug**
If you open the forum in mobile, you can see, the notifications icon (and card) is wrong-sized
**To Reproduce**
Steps to reproduce the behavior:
1. Go to the test forum on mobile
2. Click on Log In
3. After Logging in, expand the drawer
4. See error
**Expected behavior**
The notifications section's card looks normal
**Screenshots**

**Environment (please complete the following information):**
- OS: Windows 10 Home && Android
- Browser: Opera
- Flarum Url: materialtheme.freeflarum.com
| non_process | drawer notification icon problem describe the bug if you open the forum in mobile you can see the notifications icon and card is wrong sized to reproduce steps to reproduce the behavior go to the test forum on mobile click on log in after logging in expand the drawer see error expected behavior the notifications section s card looks normal screenshots environment please complete the following information os windows home android browser opera flarum url materialtheme freeflarum com | 0 |
22,712 | 32,037,654,389 | IssuesEvent | 2023-09-22 16:34:14 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | [MLv2] Port `engine` to MLv2 | Querying/Native .Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench: | There're a few MLv1 methods the editor uses that'd be nice to port to MLv2 as well:
`engine` returns a DB engine name given a query
https://github.com/metabase/metabase/blob/dbfca6c6d173294ddcf97b394750574b4ef10221/frontend/src/metabase-lib/queries/NativeQuery.ts#L148 | 1.0 | [MLv2] Port `engine` to MLv2 - There're a few MLv1 methods the editor uses that'd be nice to port to MLv2 as well:
`engine` returns a DB engine name given a query
https://github.com/metabase/metabase/blob/dbfca6c6d173294ddcf97b394750574b4ef10221/frontend/src/metabase-lib/queries/NativeQuery.ts#L148 | process | port engine to there re a few methods the editor uses that d be nice to port to as well engine returns a db engine name given a query | 1 |
65,590 | 27,148,858,486 | IssuesEvent | 2023-02-16 22:35:03 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | opened | Set up monitoring and alerts for 201 project sets | training ops and shared services | **Describe the issue**
Alerts are created for the namespaces associated with the license plates below
- [ ] Alert for misconfigured PDB
- [.] Alert for long-running load test
**Additional context**
Related docs that may assist with creating prometheus rules & alerts:
https://docs.openshift.com/container-platform/4.11/monitoring/managing-alerts.html#creating-alerting-rules-for-user-defined-projects_managing-alerts
https://docs.openshift.com/container-platform/4.11/monitoring/managing-alerts.html#creating-alert-routing-for-user-defined-projects_managing-alerts
https://prometheus.io/docs/alerting/latest/configuration/#email_config
Steven provided this example of a cluster wide alert for PDB's that can form a template for the PDB alerting. Formatting lost in MSTeam copy paste:
`alert: PodDisruptionBudgetAtLimit annotations: description: The pod disruption budget is at the minimum disruptions allowed level. The number of current healthy pods is equal to the desired healthy pods. runbook_url: https://github.com/openshift/runbooks/blob/master/alerts/cluster-kube-controller-manager-operator/PodDisruptionBudgetAtLimit.md summary: The pod disruption budget is preventing further disruption to pods. expr: | max by(namespace, poddisruptionbudget) (kube_poddisruptionbudget_status_current_healthy == kube_poddisruptionbudget_status_desired_healthy and on (namespace, poddisruptionbudget) kube_poddisruptionbudget_status_expected_pods > 0) for: 60m labels: severity: warning - alert: PodDisruptionBudgetLimit annotations: description: The pod disruption budget is below the minimum disruptions allowed level and is not satisfied. The number of current healthy pods is less than the desired healthy pods. runbook_url: https://github.com/openshift/runbooks/blob/master/alerts/cluster-kube-controller-manager-operator/PodDisruptionBudgetLimit.md summary: The pod disruption budget registers insufficient amount of pods. expr: | max by (namespace, poddisruptionbudget) (kube_poddisruptionbudget_status_current_healthy < kube_poddisruptionbudget_status_desired_healthy) for: 15m labels: severity: critical`
Steven suggested this as a starting point for a load test rule:
`kube_deployment_spec_replicas{deployment="load-test",namespace=~"ad204f-.*|ea8776-.*"} > 0`
_"some trial and error will be needed. I think you could create the rule without the namespace part, in each namespace and it should scope it for you."_
https://github.com/bcgov-c/platform-ops/blob/main/roles/config-infra/templates/alertmanager.yaml.j2 is the cluster scoped alertmanager config to copy some email routing settings from
The OpenShift 201 lab namespaces have the following license plates:
eca693
ecac3f
c2a206
be8d7e
ede379
e1f581
aa15e2
c0571b
bc454e
bba23e
b353f4
cf8581
ef3c25
c37719
fe64f4
eb7e66
b80657
e3fa55
f73ace
b51b5b
cb1c00
**How does this benefit the users of our platform?**
**Definition of done**
Identify what will need to happen/be delivered for this to be completely done
| 1.0 | Set up monitoring and alerts for 201 project sets - **Describe the issue**
Alerts are created for the namespaces associated with the license plates below
- [ ] Alert for misconfigured PDB
- [.] Alert for long-running load test
**Additional context**
Related docs that may assist with creating prometheus rules & alerts:
https://docs.openshift.com/container-platform/4.11/monitoring/managing-alerts.html#creating-alerting-rules-for-user-defined-projects_managing-alerts
https://docs.openshift.com/container-platform/4.11/monitoring/managing-alerts.html#creating-alert-routing-for-user-defined-projects_managing-alerts
https://prometheus.io/docs/alerting/latest/configuration/#email_config
Steven provided this example of a cluster wide alert for PDB's that can form a template for the PDB alerting. Formatting lost in MSTeam copy paste:
`alert: PodDisruptionBudgetAtLimit annotations: description: The pod disruption budget is at the minimum disruptions allowed level. The number of current healthy pods is equal to the desired healthy pods. runbook_url: https://github.com/openshift/runbooks/blob/master/alerts/cluster-kube-controller-manager-operator/PodDisruptionBudgetAtLimit.md summary: The pod disruption budget is preventing further disruption to pods. expr: | max by(namespace, poddisruptionbudget) (kube_poddisruptionbudget_status_current_healthy == kube_poddisruptionbudget_status_desired_healthy and on (namespace, poddisruptionbudget) kube_poddisruptionbudget_status_expected_pods > 0) for: 60m labels: severity: warning - alert: PodDisruptionBudgetLimit annotations: description: The pod disruption budget is below the minimum disruptions allowed level and is not satisfied. The number of current healthy pods is less than the desired healthy pods. runbook_url: https://github.com/openshift/runbooks/blob/master/alerts/cluster-kube-controller-manager-operator/PodDisruptionBudgetLimit.md summary: The pod disruption budget registers insufficient amount of pods. expr: | max by (namespace, poddisruptionbudget) (kube_poddisruptionbudget_status_current_healthy < kube_poddisruptionbudget_status_desired_healthy) for: 15m labels: severity: critical`
Steven suggested this as a starting point for a load test rule:
`kube_deployment_spec_replicas{deployment="load-test",namespace=~"ad204f-.*|ea8776-.*"} > 0`
_"some trial and error will be needed. I think you could create the rule without the namespace part, in each namespace and it should scope it for you."_
https://github.com/bcgov-c/platform-ops/blob/main/roles/config-infra/templates/alertmanager.yaml.j2 is the cluster scoped alertmanager config to copy some email routing settings from
The OpenShift 201 lab namespaces have the following license plates:
eca693
ecac3f
c2a206
be8d7e
ede379
e1f581
aa15e2
c0571b
bc454e
bba23e
b353f4
cf8581
ef3c25
c37719
fe64f4
eb7e66
b80657
e3fa55
f73ace
b51b5b
cb1c00
**How does this benefit the users of our platform?**
**Definition of done**
Identify what will need to happen/be delivered for this to be completely done
| non_process | set up monitoring and alerts for project sets describe the issue alerts are created for the namespaces associated with the license plates below alert for misconfigured pdb alert for long running load test additional context related docs that may assist with creating prometheus rules alerts steven provided this example of a cluster wide alert for pdb s that can form a template for the pdb alerting formatting lost in msteam copy paste alert poddisruptionbudgetatlimit annotations description the pod disruption budget is at the minimum disruptions allowed level the number of current healthy pods is equal to the desired healthy pods runbook url summary the pod disruption budget is preventing further disruption to pods expr max by namespace poddisruptionbudget kube poddisruptionbudget status current healthy kube poddisruptionbudget status desired healthy and on namespace poddisruptionbudget kube poddisruptionbudget status expected pods for labels severity warning alert poddisruptionbudgetlimit annotations description the pod disruption budget is below the minimum disruptions allowed level and is not satisfied the number of current healthy pods is less than the desired healthy pods runbook url summary the pod disruption budget registers insufficient amount of pods expr max by namespace poddisruptionbudget kube poddisruptionbudget status current healthy kube poddisruptionbudget status desired healthy for labels severity critical steven suggested this as a starting point for a load test rule kube deployment spec replicas deployment load test namespace some trial and error will be needed i think you could create the rule without the namespace part in each namespace and it should scope it for you is the cluster scoped alertmanager config to copy some email routing settings from the openshift lab namespaces have the following license plates how does this benefit the users of our platform definition of done identify what will need to happen be delivered for this to be completely done | 0
5,500 | 8,364,799,269 | IssuesEvent | 2018-10-04 01:03:52 | googleapis/google-cloud-python | https://api.github.com/repos/googleapis/google-cloud-python | closed | Asset: release asset 0.2.0 | api:asset packaging type: process | The original release (#5988) used `google-cloud-cloudasset` as the PyPI name. We need a new release to push our preferred name.
@theacodes RFC: do we need to make a `google-cloud-cloudasset 0.1.post1` release which emits a deprecation warning and points to `google-cloud-asset`? | 1.0 | Asset: release asset 0.2.0 - The original release (#5988) used `google-cloud-cloudasset` as the PyPI name. We need a new release to push our preferred name.
@theacodes RFC: do we need to make a `google-cloud-cloudasset 0.1.post1` release which emits a deprecation warning and points to `google-cloud-asset`? | process | asset release asset the original release used google cloud cloudasset as the pypi name we need a new release to push our preferred name theacodes rfc do we need to make a google cloud cloudasset release which emits a deprecation warning and points to google cloud asset | 1 |
18,514 | 24,551,625,535 | IssuesEvent | 2022-10-12 13:02:40 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [Participant manager] [Study participant registry] Searched results are not displaying for few participant email address | Bug P1 Participant manager Process: Fixed Process: Tested QA Process: Tested dev | Steps:-
1. Login into PM
2. Navigate to Study participant registry
3. Search for the participant email similar to **abc+1@xyz.com** which is available in the added participant email list in the Search bar and observe
A/R:- Displaying as 'No records found'
E/R:- App should display search results if the email is available in the list for that study


| 3.0 | [Participant manager] [Study participant registry] Searched results are not displaying for few participant email address - Steps:-
1. Login into PM
2. Navigate to Study participant registry
3. Search for the participant email similar to **abc+1@xyz.com** which is available in the added participant email list in the Search bar and observe
A/R:- Displaying as 'No records found'
E/R:- App should display search results if the email is available in the list for that study


| process | searched results are not displaying for few participant email address steps login into pm navigate to study participant registry search for the participant email similar to abc xyz com which is available in the added participant email list in the search bar and observe a r displaying as no records found e r app should display search results if the email is available in the list for that study | 1 |
21,498 | 29,661,828,171 | IssuesEvent | 2023-06-10 08:50:36 | nextflow-io/nextflow | https://api.github.com/repos/nextflow-io/nextflow | closed | Process using exec: fails if using storeDir | stale lang/processes | ## Bug report
### Expected behavior and actual behavior
When using a process with an `exec:` block defined and a `storeDir` defined the process always fails to find the file despite it existing in `task.workDir`
### Steps to reproduce the problem
```
import groovy.json.JsonOutput
process foo {
storeDir "./cache"
input:
tuple val(uuid), val(map_value)
output:
tuple val(uuid), path("${uuid}.json"), emit: json
exec:
def outfile = task.workDir.resolve("${uuid}.json")
def json_str = JsonOutput.toJson(map_value)
File out = new File(outfile.toString())
println outfile
out.write(json_str)
}
input_channel = Channel.from("1", "2")
.map {uuid -> [ uuid, [key: "value_${uuid}" ] ] }
workflow {
foo(input_channel)
}
```
### Program output
Error suggesting the file "1.json" or "2.json" is missing from the working directory
### Environment
* Nextflow version: 22.04.5.5708
* Java version: openjdk 11.0.16.1
* Operating system: Linux
* Bash version: 5.1.16
### Additional context
Removing the `storeDir` directive results in the process completing without issue
| 1.0 | Process using exec: fails if using storeDir - ## Bug report
### Expected behavior and actual behavior
When using a process with an `exec:` block defined and a `storeDir` defined the process always fails to find the file despite it existing in `task.workDir`
### Steps to reproduce the problem
```
import groovy.json.JsonOutput
process foo {
storeDir "./cache"
input:
tuple val(uuid), val(map_value)
output:
tuple val(uuid), path("${uuid}.json"), emit: json
exec:
def outfile = task.workDir.resolve("${uuid}.json")
def json_str = JsonOutput.toJson(map_value)
File out = new File(outfile.toString())
println outfile
out.write(json_str)
}
input_channel = Channel.from("1", "2")
.map {uuid -> [ uuid, [key: "value_${uuid}" ] ] }
workflow {
foo(input_channel)
}
```
### Program output
Error suggesting the file "1.json" or "2.json" is missing from the working directory
### Environment
* Nextflow version: 22.04.5.5708
* Java version: openjdk 11.0.16.1
* Operating system: Linux
* Bash version: 5.1.16
### Additional context
Removing the `storeDir` directive results in the process completing without issue
| process | process using exec fails if using storedir bug report expected behavior and actual behavior when using a process with an exec block defined and a storedir defined the process always fails to find the file despite it existing in task workdir steps to reproduce the problem import groovy json jsonoutput process foo storedir cache input tuple val uuid val map value output tuple val uuid path uuid json emit json exec def outfile task workdir resolve uuid json def json str jsonoutput tojson map value file out new file outfile tostring println outfile out write json str input channel channel from map uuid workflow foo input channel program output error suggesting the file json or json is missing from the working directory environment nextflow version java version openjdk operating system linux bash version additional context removing the storedir directive results in the process completing without issue | 1 |
1,147 | 3,633,331,880 | IssuesEvent | 2016-02-11 14:12:34 | matz-e/lobster | https://api.github.com/repos/matz-e/lobster | closed | Need to refresh Lobster settings while running | enhancement processing | Should be done by opening a socket in the Lobster working dir, and reading values from that periodically to adjust, with a simple protocol. A Lobster command could then talk to the socket, interfacing with the user. After changing values, the resulting changes should be re-pickled. | 1.0 | Need to refresh Lobster settings while running - Should be done by opening a socket in the Lobster working dir, and reading values from that periodically to adjust, with a simple protocol. A Lobster command could then talk to the socket, interfacing with the user. After changing values, the resulting changes should be re-pickled. | process | need to refresh lobster settings while running should be done by opening a socket in the lobster working dir and reading values from that periodically to adjust with a simple protocol a lobster command could then talk to the socket interfacing with the user after changing values the resulting changes should be re pickled | 1 |
11,789 | 14,617,835,134 | IssuesEvent | 2020-12-22 15:21:01 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | opened | crash in graphical modeler due to QList index invalidation | Bug Crash/Data Corruption Processing | **Describe the bug**
crash in Processing Graphical Modeler due to index invalidation
@nyalldawson Found another race condition that was fixed in:
https://github.com/qgis/QGIS/pull/39200
and
https://github.com/qgis/QGIS/pull/39009
**How to Reproduce**
I will attach a complex model that generates the issue; btw it seems the issue happens when
1. create a main model using another sub-model linking all input to the sub-model
2. update sub model adding new not hidden input
3. edit main-model => crash
**QGIS and OS versions**
QGIS version | 3.16.1-Hannover | QGIS code revision | 37972328b7
-- | -- | -- | --
Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8
Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1
PostgreSQL Client Version | 12.5 (Ubuntu 12.5-0ubuntu0.20.04.1) | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2
Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020
OS Version | Ubuntu 20.04.1 LTS | This copy of QGIS writes debugging output.
| 1.0 | crash in graphical modeler due to QList index invalidation - **Describe the bug**
crash in Processing Graphical Modeler due to index invalidation
@nyalldawson Found another race condition that was fixed in:
https://github.com/qgis/QGIS/pull/39200
and
https://github.com/qgis/QGIS/pull/39009
**How to Reproduce**
I will attach a complex model that generates the issue; btw it seems the issue happens when
1. create a main model using another sub-model linking all input to the sub-model
2. update sub model adding new not hidden input
3. edit main-model => crash
**QGIS and OS versions**
QGIS version | 3.16.1-Hannover | QGIS code revision | 37972328b7
-- | -- | -- | --
Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8
Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1
PostgreSQL Client Version | 12.5 (Ubuntu 12.5-0ubuntu0.20.04.1) | SpatiaLite Version | 4.3.0a
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2
Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020
OS Version | Ubuntu 20.04.1 LTS | This copy of QGIS writes debugging output.
| process | crash in graphical modeler due to qlist index invalidation describe the bug crash in processing graphical modeler due to index invalidation nyalldawson found another race condition that was fixed in and how to reproduce i will attach comlex model taht generate the issue btw seems the issue happen when create a main model using another sub model linking all input to the sub model update sub model adding new not hidden input edit main model crash qgis and os versions qgis version hannover qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version ubuntu spatialite version qwt version version compiled against proj running against proj rel february os version ubuntu lts this copy of qgis writes debugging output | 1 |
14,029 | 16,827,248,267 | IssuesEvent | 2021-06-17 20:23:51 | googleapis/python-vision | https://api.github.com/repos/googleapis/python-vision | closed | TextDetectionParams does not support all documented fields | api: vision type: process | The current definition of the class `google.cloud.vision_v1.types.TextDetectionParams` does not support all of the parameters that are documented in the Google Cloud Vision API. If possible, it would be good to get the additional parameters supported.
#### Environment details
- OS type and version: macOS 10.13.6
- Python version: 3.9.1
- pip version: 21.0.1
- `google-cloud-vision` version:
```
Name: google-cloud-vision
Version: 2.3.1
Summary: Cloud Vision API API client library
Home-page: https://github.com/googleapis/python-vision
Author: Google LLC
Author-email: googleapis-packages@google.com
License: Apache 2.0
Location: /usr/local/lib/python3.9/site-packages
Requires: google-api-core, proto-plus
Required-by:
```
#### Steps to reproduce
According to the documentation at https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.TextDetectionParams, the `TextDetectionParams` object has a number of possible field values. However, the corresponding class in the Python package currently (as of 2.3.1) _only_ has `enable_text_detection_confidence_score`, as can be seen in the documentation at https://googleapis.dev/python/vision/latest/vision_v1/types.html?highlight=textdetection#google.cloud.vision_v1.types.TextDetectionParams and also the source code
#### Code example
This doesn't seem necessary; the issue is pretty clearly evident from both the API docs and the code itself.
#### Stack trace
Not relevant. | 1.0 | TextDetectionParams does not support all documented fields - The current definition of the class `google.cloud.vision_v1.types.TextDetectionParams` does not support all of the parameters that are documented in the Google Cloud Vision API. If possible, it would be good to get the additional parameters supported.
#### Environment details
- OS type and version: macOS 10.13.6
- Python version: 3.9.1
- pip version: 21.0.1
- `google-cloud-vision` version:
```
Name: google-cloud-vision
Version: 2.3.1
Summary: Cloud Vision API API client library
Home-page: https://github.com/googleapis/python-vision
Author: Google LLC
Author-email: googleapis-packages@google.com
License: Apache 2.0
Location: /usr/local/lib/python3.9/site-packages
Requires: google-api-core, proto-plus
Required-by:
```
#### Steps to reproduce
According to the documentation at https://cloud.google.com/vision/docs/reference/rpc/google.cloud.vision.v1#google.cloud.vision.v1.TextDetectionParams, the `TextDetectionParams` object has a number of possible field values. However, the corresponding class in the Python package currently (as of 2.3.1) _only_ has `enable_text_detection_confidence_score`, as can be seen in the documentation at https://googleapis.dev/python/vision/latest/vision_v1/types.html?highlight=textdetection#google.cloud.vision_v1.types.TextDetectionParams and also the source code
#### Code example
This doesn't seem necessary; the issue is pretty clearly evident from both the API docs and the code itself.
#### Stack trace
Not relevant. | process | textdetectionparams does not support all documented fields the current definition of the class google cloud vision types textdetectionparams does not support all of the parameters that are documented in the google cloud vision api if possible it would be good to get the additional parameters supported environment details os type and version macos python version pip version google cloud vision version name google cloud vision version summary cloud vision api api client library home page author google llc author email googleapis packages google com license apache location usr local lib site packages requires google api core proto plus required by steps to reproduce according to the documentation at the textdetectionparams object has a number of possible field values however the corresponding class in the python package currently as of only has enable text detection confidence score as can be seen in the documentation at and also the source code code example this doesn t seem necessary the issue is pretty clearly evident from both the api docs and the code itself stack trace not relevant | 1 |
196,955 | 22,571,984,102 | IssuesEvent | 2022-06-28 01:43:30 | ChoeMinji/react-16.0.0 | https://api.github.com/repos/ChoeMinji/react-16.0.0 | opened | CVE-2021-42740 (High) detected in shell-quote-1.6.1.tgz | security vulnerability | ## CVE-2021-42740 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>shell-quote-1.6.1.tgz</b></p></summary>
<p>quote and parse shell commands</p>
<p>Library home page: <a href="https://registry.npmjs.org/shell-quote/-/shell-quote-1.6.1.tgz">https://registry.npmjs.org/shell-quote/-/shell-quote-1.6.1.tgz</a></p>
<p>Path to dependency file: /fixtures/dom/package.json</p>
<p>Path to vulnerable library: /fixtures/dom/node_modules/shell-quote/package.json,/fixtures/attribute-behavior/node_modules/shell-quote/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-1.0.11.tgz (Root Library)
- react-dev-utils-3.1.1.tgz
- :x: **shell-quote-1.6.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/react-16.0.0/commit/b9bd902dad80b8b5fa55a183526357266ae47bcc">b9bd902dad80b8b5fa55a183526357266ae47bcc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The shell-quote package before 1.7.3 for Node.js allows command injection. An attacker can inject unescaped shell metacharacters through a regex designed to support Windows drive letters. If the output of this package is passed to a real shell as a quoted argument to a command with exec(), an attacker can inject arbitrary commands. This is because the Windows drive letter regex character class is {A-z] instead of the correct {A-Za-z]. Several shell metacharacters exist in the space between capital letter Z and lower case letter a, such as the backtick character.
<p>Publish Date: 2021-10-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-42740>CVE-2021-42740</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-42740">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-42740</a></p>
<p>Release Date: 2021-10-21</p>
<p>Fix Resolution: shell-quote - 1.7.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
22,128 | 30,673,581,203 | IssuesEvent | 2023-07-26 02:00:08 | lizhihao6/get-daily-arxiv-noti | https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti | opened | New submissions for Wed, 26 Jul 23 | event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB | ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Imperceptible Physical Attack against Face Recognition Systems via LED Illumination Modulation
- **Authors:** Junbin Fang, Canjian Jiang, You Jiang, Puxi Lin, Zhaojie Chen, Yujing Sun, Siu-Ming Yiu, Zoe L. Jiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2307.13294
- **Pdf link:** https://arxiv.org/pdf/2307.13294
- **Abstract**
Although face recognition starts to play an important role in our daily life, we need to pay attention that data-driven face recognition vision systems are vulnerable to adversarial attacks. However, the current two categories of adversarial attacks, namely digital attacks and physical attacks, both have drawbacks, with the former ones impractical and the latter ones conspicuous, high-computational and inexecutable. To address the issues, we propose a practical, executable, inconspicuous and low computational adversarial attack based on LED illumination modulation. To fool the systems, the proposed attack generates imperceptible luminance changes to human eyes through fast intensity modulation of scene LED illumination and uses the rolling shutter effect of CMOS image sensors in face recognition systems to implant luminance information perturbation to the captured face images. In summary, we present a denial-of-service (DoS) attack for face detection and a dodging attack for face verification. We also evaluate their effectiveness against well-known face detection models, Dlib, MTCNN and RetinaFace, and face verification models, Dlib, FaceNet, and ArcFace. The extensive experiments show that the success rates of DoS attacks against face detection models reach 97.67%, 100%, and 100%, respectively, and the success rates of dodging attacks against all face verification models reach 100%.
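The rolling-shutter mechanism this abstract relies on can be sketched in a few lines. The numbers below (per-row readout time, flicker frequency) are hypothetical, chosen only to make the banding visible; they are not taken from the paper:

```python
# A rolling-shutter sensor exposes image rows one after another, so an LED
# square wave that is fast relative to the per-row readout imprints
# alternating bright/dark bands across the rows of a single frame.
rows = 8
row_time_us = 1000      # hypothetical: 1 ms readout per row
period_us = 4000        # hypothetical: 250 Hz LED square wave, 50% duty

bands = [1 if (r * row_time_us) % period_us < period_us // 2 else 0
         for r in range(rows)]
print(bands)  # [1, 1, 0, 0, 1, 1, 0, 0] -- luminance stripes in one frame
```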
## Keyword: ISP
### Does Progress On Object Recognition Benchmarks Improve Real-World Generalization?
- **Authors:** Megan Richards, Polina Kirichenko, Diane Bouchacourt, Mark Ibrahim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2307.13136
- **Pdf link:** https://arxiv.org/pdf/2307.13136
- **Abstract**
For more than a decade, researchers have measured progress in object recognition on ImageNet-based generalization benchmarks such as ImageNet-A, -C, and -R. Recent advances in foundation models, trained on orders of magnitude more data, have begun to saturate these standard benchmarks, but remain brittle in practice. This suggests standard benchmarks, which tend to focus on predefined or synthetic changes, may not be sufficient for measuring real world generalization. Consequently, we propose studying generalization across geography as a more realistic measure of progress using two datasets of objects from households across the globe. We conduct an extensive empirical evaluation of progress across nearly 100 vision models up to most recent foundation models. We first identify a progress gap between standard benchmarks and real-world, geographical shifts: progress on ImageNet results in up to 2.5x more progress on standard generalization benchmarks than real-world distribution shifts. Second, we study model generalization across geographies by measuring the disparities in performance across regions, a more fine-grained measure of real world generalization. We observe all models have large geographic disparities, even foundation CLIP models, with differences of 7-20% in accuracy between regions. Counter to modern intuition, we discover progress on standard benchmarks fails to improve geographic disparities and often exacerbates them: geographic disparities between the least performant models and today's best models have more than tripled. Our results suggest scaling alone is insufficient for consistent robustness to real-world distribution shifts. Finally, we highlight in early experiments how simple last layer retraining on more representative, curated data can complement scaling as a promising direction of future work, reducing geographic disparity on both benchmarks by over two-thirds.
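The geographic-disparity measure described above amounts to comparing one model's accuracy across regions. A toy sketch with hypothetical numbers (illustrative only, not results from the paper) of the gap computation:

```python
# Hypothetical per-region accuracies for one model; the disparity is the
# spread between the best- and worst-served regions.
region_acc = {"Africa": 0.61, "Americas": 0.78, "Asia": 0.72, "Europe": 0.81}
disparity = max(region_acc.values()) - min(region_acc.values())
print(f"{disparity:.2f}")  # 0.20 -> a 20-point geographic gap
```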
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Multi-Granularity Prediction with Learnable Fusion for Scene Text Recognition
- **Authors:** Cheng Da, Peng Wang, Cong Yao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.13244
- **Pdf link:** https://arxiv.org/pdf/2307.13244
- **Abstract**
Due to the enormous technical challenges and wide range of applications, scene text recognition (STR) has been an active research topic in computer vision for years. To tackle this tough problem, numerous innovative methods have been successively proposed, and incorporating linguistic knowledge into STR models has recently become a prominent trend. In this work, we first draw inspiration from the recent progress in Vision Transformer (ViT) to construct a conceptually simple yet functionally powerful vision STR model, which is built upon ViT and a tailored Adaptive Addressing and Aggregation (A^3) module. It already outperforms most previous state-of-the-art models for scene text recognition, including both pure vision models and language-augmented methods. To integrate linguistic knowledge, we further propose a Multi-Granularity Prediction strategy to inject information from the language modality into the model in an implicit way, i.e., subword representations (BPE and WordPiece) widely used in NLP are introduced into the output space, in addition to the conventional character-level representation, while no independent language model (LM) is adopted. To produce the final recognition results, two strategies for effectively fusing the multi-granularity predictions are devised. The resultant algorithm (termed MGP-STR) is able to push the performance envelope of STR to an even higher level. Specifically, MGP-STR achieves an average recognition accuracy of 94% on standard benchmarks for scene text recognition. Moreover, it also achieves state-of-the-art results on widely-used handwritten benchmarks as well as more challenging scene text datasets, demonstrating the generality of the proposed MGP-STR algorithm. The source code and models will be available at: https://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/OCR/MGP-STR.
### Imperceptible Physical Attack against Face Recognition Systems via LED Illumination Modulation
- **Authors:** Junbin Fang, Canjian Jiang, You Jiang, Puxi Lin, Zhaojie Chen, Yujing Sun, Siu-Ming Yiu, Zoe L. Jiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2307.13294
- **Pdf link:** https://arxiv.org/pdf/2307.13294
- **Abstract**
Although face recognition starts to play an important role in our daily life, we need to pay attention that data-driven face recognition vision systems are vulnerable to adversarial attacks. However, the current two categories of adversarial attacks, namely digital attacks and physical attacks, both have drawbacks, with the former ones impractical and the latter ones conspicuous, high-computational and inexecutable. To address the issues, we propose a practical, executable, inconspicuous and low computational adversarial attack based on LED illumination modulation. To fool the systems, the proposed attack generates imperceptible luminance changes to human eyes through fast intensity modulation of scene LED illumination and uses the rolling shutter effect of CMOS image sensors in face recognition systems to implant luminance information perturbation to the captured face images. In summary, we present a denial-of-service (DoS) attack for face detection and a dodging attack for face verification. We also evaluate their effectiveness against well-known face detection models, Dlib, MTCNN and RetinaFace, and face verification models, Dlib, FaceNet, and ArcFace. The extensive experiments show that the success rates of DoS attacks against face detection models reach 97.67%, 100%, and 100%, respectively, and the success rates of dodging attacks against all face verification models reach 100%.
### Unlocking the Emotional World of Visual Media: An Overview of the Science, Research, and Impact of Understanding Emotion
- **Authors:** James Z. Wang, Sicheng Zhao, Chenyan Wu, Reginald B. Adams, Michelle G. Newman, Tal Shafir, Rachelle Tsachor
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2307.13463
- **Pdf link:** https://arxiv.org/pdf/2307.13463
- **Abstract**
The emergence of artificial emotional intelligence technology is revolutionizing the fields of computers and robotics, allowing for a new level of communication and understanding of human behavior that was once thought impossible. While recent advancements in deep learning have transformed the field of computer vision, automated understanding of evoked or expressed emotions in visual media remains in its infancy. This foundering stems from the absence of a universally accepted definition of "emotion", coupled with the inherently subjective nature of emotions and their intricate nuances. In this article, we provide a comprehensive, multidisciplinary overview of the field of emotion analysis in visual media, drawing on insights from psychology, engineering, and the arts. We begin by exploring the psychological foundations of emotion and the computational principles that underpin the understanding of emotions from images and videos. We then review the latest research and systems within the field, accentuating the most promising approaches. We also discuss the current technological challenges and limitations of emotion analysis, underscoring the necessity for continued investigation and innovation. We contend that this represents a "Holy Grail" research problem in computing and delineate pivotal directions for future inquiry. Finally, we examine the ethical ramifications of emotion-understanding technologies and contemplate their potential societal impacts. Overall, this article endeavors to equip readers with a deeper understanding of the domain of emotion analysis in visual media and to inspire further research and development in this captivating and rapidly evolving field.
## Keyword: raw image
There is no result
adams michelle g newman tal shafir rachelle tsachor subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract the emergence of artificial emotional intelligence technology is revolutionizing the fields of computers and robotics allowing for a new level of communication and understanding of human behavior that was once thought impossible while recent advancements in deep learning have transformed the field of computer vision automated understanding of evoked or expressed emotions in visual media remains in its infancy this foundering stems from the absence of a universally accepted definition of emotion coupled with the inherently subjective nature of emotions and their intricate nuances in this article we provide a comprehensive multidisciplinary overview of the field of emotion analysis in visual media drawing on insights from psychology engineering and the arts we begin by exploring the psychological foundations of emotion and the computational principles that underpin the understanding of emotions from images and videos we then review the latest research and systems within the field accentuating the most promising approaches we also discuss the current technological challenges and limitations of emotion analysis underscoring the necessity for continued investigation and innovation we contend that this represents a holy grail research problem in computing and delineate pivotal directions for future inquiry finally we examine the ethical ramifications of emotion understanding technologies and contemplate their potential societal impacts overall this article endeavors to equip readers with a deeper understanding of the domain of emotion analysis in visual media and to inspire further research and development in this captivating and rapidly evolving field keyword raw image there is no result | 1 |
12,041 | 14,738,728,774 | IssuesEvent | 2021-01-07 05:34:17 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Service period dates | anc-process anp-1.5 ant-bug ant-support | In GitLab by @kdjstudios on Jul 17, 2018, 14:05
Hello Team,
During discussion on ticket #966 it came to our attention that the service period dates are two months (billing cycles) ahead, not just the upcoming month (billing cycle).
May we please check all internal and external sites to confirm that this is an isolated issue. | 1.0 | Service period dates - In GitLab by @kdjstudios on Jul 17, 2018, 14:05
Hello Team,
During discussion on ticket #966 it came to our attention that the service period dates are two months (billing cycles) ahead, not just the upcoming month (billing cycle).
May we please check all internal and external sites to confirm that this is an isolated issue. | process | service period dates in gitlab by kdjstudios on jul hello team during discussion on ticket it came to our attention that the service period dates are two months billing cycles ahead not just the upcoming month billing cycle may we please check all internal and external sites to confirm that this is an isolated issue | 1 |
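The reasoning in the billing record above — a service period should cover only the upcoming billing cycle, not two cycles ahead — can be sketched as a small date check. This is an illustrative sketch only: the function names and the assumption that periods start on the first of a calendar month are mine, not taken from the SA Billing codebase.

```python
from datetime import date

def add_months(d: date, n: int) -> date:
    # Move a date forward by n calendar months, clamping to day 1
    # (service periods are assumed here to start on the first of the month).
    month_index = d.month - 1 + n
    return date(d.year + month_index // 12, month_index % 12 + 1, 1)

def expected_service_period(billing_date: date) -> date:
    # Correct behaviour per the ticket: the upcoming billing cycle only.
    return add_months(billing_date, 1)

def observed_service_period(billing_date: date) -> date:
    # Buggy behaviour described in the ticket: two cycles ahead.
    return add_months(billing_date, 2)

if __name__ == "__main__":
    billed = date(2018, 7, 17)  # date the ticket was filed
    print(expected_service_period(billed))  # 2018-08-01
    print(observed_service_period(billed))  # 2018-09-01
```

Comparing the two functions against stored service periods on each site would be one way to confirm whether the off-by-one-cycle behaviour is isolated.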
12,665 | 15,035,884,297 | IssuesEvent | 2021-02-02 14:38:58 | panther-labs/panther | https://api.github.com/repos/panther-labs/panther | closed | BE: System status notification system | p1 story team:data processing | ### Description
Backend system for delivering system notification changes to the user
### Related Services
Which backend services must change for this story to be completed?
### Designs
Paste the link to your designs here
### Acceptance Criteria
A concise list of specific user stories that qualify this story as done.
This acts as a checklist and high-level context for anyone reading this issue to verify your implementation.
For example:
- We can collect anonymized frontend crash logs from user browsers
- Users can opt in to send these logs to panther
- The crash logs will contain the following fields : browser version
- Users can opt-out from collection at any time
- ...
| 1.0 | BE: System status notification system - ### Description
Backend system for delivering system notification changes to the user
### Related Services
Which backend services must change for this story to be completed?
### Designs
Paste the link to your designs here
### Acceptance Criteria
A concise list of specific user stories that qualify this story as done.
This acts as a checklist and high-level context for anyone reading this issue to verify your implementation.
For example:
- We can collect anonymized frontend crash logs from user browsers
- Users can opt in to send these logs to panther
- The crash logs will contain the following fields : browser version
- Users can opt-out from collection at any time
- ...
| process | be system status notification system description backend system for delivering system notification changes to the user related services which backend services must change for this story to be completed designs paste the link to your designs here acceptance criteria a concise list of specific user stories that qualify this story as done this acts as a checklist and high level context for anyone reading this issue to verify your implementation for example we can collect anonymized frontend crash logs from user browsers users can opt in to send these logs to panther the crash logs will contain the following fields browser version users can opt out from collection at any time | 1 |
817,627 | 30,646,538,700 | IssuesEvent | 2023-07-25 05:25:57 | elastic/security-docs | https://api.github.com/repos/elastic/security-docs | closed | Expand and refine docs for AI Assistant [8.9.0] | v8.9.0 Priority: High Feature: GenAI Effort: Large | ## Description
The [initial docs](https://github.com/elastic/security-docs/pull/3425) for AI Assistant's MVP release in `8.8.1` were intended to be somewhat minimal, focusing on making sure readers know how to set up the Assistant and use some basic functionality. For `8.9.0`, expand the docs with further nuances, full functionality, best practices & guidance on writing effective prompts, known issues, troubleshooting, and so on.
Also include any new/refined features implemented by development since the 8.8.1 release.
Related (consider breaking features into separate issues/PRs?):
* https://github.com/elastic/security-team/issues/6877 (this also lists issues/PRs for other features in 8.9)
* https://github.com/elastic/kibana/pull/159857
* https://github.com/elastic/kibana/pull/159075
### Background:
* Docs PR for `8.8.1`: https://github.com/elastic/security-docs/pull/3425 ([issue #3420](https://github.com/elastic/security-docs/issues/3420))
* Dev:
- Epic: https://github.com/elastic/security-team/issues/6775
- Another Epic: https://github.com/elastic/security-team/issues/6475
- Kibana PR: https://github.com/elastic/kibana/pull/156933
- _Another one_: https://github.com/elastic/kibana/pull/159054
### To Do
Connector - @joepeeples
- [x] Create stub page for docs link: creating a role with privileges needed to view/query data on token tracking dashboard
- [x] Add docs for token dashboard, incl privileges needed to view the data (Kibana docs? also mention in Security docs?)
General updates - @joepeeples
- [x] Rename from `Security Assistant` to `AI Assistant`
- [x] Remove feature flag instructions
- [x] Add blurb: we do not store or process user data (@jamesspi to provide draft)
- [x] Update screenshot/GIF of initial chat
Anonymization - @benironside
- [x] Remove data anonymization notes
- [x] Add section on configuring anonymizations (if this gets too long, maybe consider breaking out to separate topic?)
Expand existing features - @benironside
- [x] _System prompts_ - additional configuration options (at start of conversation AND in prompt editor box)
- [x] _Quick prompts_ - more details on usage, create custom prompts, delete (hover to display red X)
- [x] _Action buttons_ - new: select AI model (OpenAI only) - to be merged soon (not in BC3)
| 1.0 | Expand and refine docs for AI Assistant [8.9.0] - ## Description
The [initial docs](https://github.com/elastic/security-docs/pull/3425) for AI Assistant's MVP release in `8.8.1` were intended to be somewhat minimal, focusing on making sure readers know how to set up the Assistant and use some basic functionality. For `8.9.0`, expand the docs with further nuances, full functionality, best practices & guidance on writing effective prompts, known issues, troubleshooting, and so on.
Also include any new/refined features implemented by development since the 8.8.1 release.
Related (consider breaking features into separate issues/PRs?):
* https://github.com/elastic/security-team/issues/6877 (this also lists issues/PRs for other features in 8.9)
* https://github.com/elastic/kibana/pull/159857
* https://github.com/elastic/kibana/pull/159075
### Background:
* Docs PR for `8.8.1`: https://github.com/elastic/security-docs/pull/3425 ([issue #3420](https://github.com/elastic/security-docs/issues/3420))
* Dev:
- Epic: https://github.com/elastic/security-team/issues/6775
- Another Epic: https://github.com/elastic/security-team/issues/6475
- Kibana PR: https://github.com/elastic/kibana/pull/156933
- _Another one_: https://github.com/elastic/kibana/pull/159054
### To Do
Connector - @joepeeples
- [x] Create stub page for docs link: creating a role with privileges needed to view/query data on token tracking dashboard
- [x] Add docs for token dashboard, incl privileges needed to view the data (Kibana docs? also mention in Security docs?)
General updates - @joepeeples
- [x] Rename from `Security Assistant` to `AI Assistant`
- [x] Remove feature flag instructions
- [x] Add blurb: we do not store or process user data (@jamesspi to provide draft)
- [x] Update screenshot/GIF of initial chat
Anonymization - @benironside
- [x] Remove data anonymization notes
- [x] Add section on configuring anonymizations (if this gets too long, maybe consider breaking out to separate topic?)
Expand existing features - @benironside
- [x] _System prompts_ - additional configuration options (at start of conversation AND in prompt editor box)
- [x] _Quick prompts_ - more details on usage, create custom prompts, delete (hover to display red X)
- [x] _Action buttons_ - new: select AI model (OpenAI only) - to be merged soon (not in BC3)
| non_process | expand and refine docs for ai assistant description the for ai assistant s mvp release in were intended to be somewhat minimal focusing on making sure readers know how to set up the assistant and use some basic functionality for expand the docs with further nuances full functionality best practices guidance on writing effective prompts known issues troubleshooting and so on also include any new refined features implemented by development since the release related consider breaking features into separate issues prs this also lists issues prs for other features in background docs pr for dev epic another epic kibana pr another one to do connector joepeeples create stub page for docs link creating a role with privileges needed to view query data on token tracking dashboard add docs for token dashboard incl privileges needed to view the data kibana docs also mention in security docs general updates joepeeples rename from security assistant to ai assistant remove feature flag instructions add blurb we do not store or process user data jamesspi to provide draft update screenshot gif of initial chat anonymization benironside remove data anonymization notes add section on configuring anonymizations if this gets too long maybe consider breaking out to separate topic expand existing features benironside system prompts additional configuration options at start of conversation and in prompt editor box quick prompts more details on usage create custom prompts delete hover to display red x action buttons new select ai model openai only to be merged soon not in | 0 |
25,499 | 7,720,218,487 | IssuesEvent | 2018-05-23 22:08:46 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | bins/msan/server_fuzzer_one_entry: server fuzzer assertion | infra/BUILDPONY kind/bug lang/core priority/P1 | https://source.cloud.google.com/results/invocations/883a4224-a262-40f1-93d6-5d2a5e5d6381/targets/github%2Fgrpc%2Fc_linux_msan_native/tests
```
D0518 17:44:29.769866816 31719 ev_posix.cc:145] Using polling engine: none
D0518 17:44:29.777853260 31719 dns_resolver.cc:339] Using native dns resolver
E0518 17:44:29.801172761 31719 server_fuzzer.cc:109] assertion failed: ev.type == GRPC_OP_COMPLETE
``` | 1.0 | bins/msan/server_fuzzer_one_entry: server fuzzer assertion - https://source.cloud.google.com/results/invocations/883a4224-a262-40f1-93d6-5d2a5e5d6381/targets/github%2Fgrpc%2Fc_linux_msan_native/tests
```
D0518 17:44:29.769866816 31719 ev_posix.cc:145] Using polling engine: none
D0518 17:44:29.777853260 31719 dns_resolver.cc:339] Using native dns resolver
E0518 17:44:29.801172761 31719 server_fuzzer.cc:109] assertion failed: ev.type == GRPC_OP_COMPLETE
``` | non_process | bins msan server fuzzer one entry server fuzzer assertion ev posix cc using polling engine none dns resolver cc using native dns resolver server fuzzer cc assertion failed ev type grpc op complete | 0 |
351,241 | 10,514,562,759 | IssuesEvent | 2019-09-28 01:39:09 | AY1920S1-CS2113T-W17-4/main | https://api.github.com/repos/AY1920S1-CS2113T-W17-4/main | opened | As a Business Analytics student, I can view trends for my tasks | priority.Low type.Story | so I can see if I am lagging behind in my tasks. | 1.0 | As a Business Analytics student, I can view trends for my tasks - so I can see if I am lagging behind in my tasks. | non_process | as a business analytics student i can view trends for my tasks so i can see if i am lagging behind in my tasks | 0 |
124,264 | 17,772,515,183 | IssuesEvent | 2021-08-30 15:09:09 | kapseliboi/Vue2Leaflet | https://api.github.com/repos/kapseliboi/Vue2Leaflet | opened | CVE-2019-8331 (Medium) detected in bootstrap-3.3.5.min.js | security vulnerability | ## CVE-2019-8331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.5.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js</a></p>
<p>Path to dependency file: Vue2Leaflet/node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>Path to vulnerable library: /node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.5.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/Vue2Leaflet/commit/53817e18041a05f9f6ac4b02e9520262cf910bcf">53817e18041a05f9f6ac4b02e9520262cf910bcf</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/28236">https://github.com/twbs/bootstrap/pull/28236</a></p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-8331 (Medium) detected in bootstrap-3.3.5.min.js - ## CVE-2019-8331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.5.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js</a></p>
<p>Path to dependency file: Vue2Leaflet/node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>Path to vulnerable library: /node_modules/autocomplete.js/test/playground_jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.5.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/Vue2Leaflet/commit/53817e18041a05f9f6ac4b02e9520262cf910bcf">53817e18041a05f9f6ac4b02e9520262cf910bcf</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/28236">https://github.com/twbs/bootstrap/pull/28236</a></p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file node modules autocomplete js test playground jquery html path to vulnerable library node modules autocomplete js test playground jquery html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap before and x before xss is possible in the tooltip or popover data template attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap bootstrap sass step up your open source security game with whitesource | 0 |
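The affected-version logic stated in the CVE record above (fixed in 3.4.1 and 4.3.1) can be sketched as a simple range check. This is a minimal sketch, not an official audit tool; the advisory's "4.3.x before 4.3.1" wording is ambiguous, so the assumption that the whole 4.0.0–4.3.0 range is affected is mine.

```python
def parse_version(v: str) -> tuple:
    # Turn a dotted version string like "3.3.5" into a comparable tuple.
    return tuple(int(part) for part in v.split("."))

def is_vulnerable_cve_2019_8331(bootstrap_version: str) -> bool:
    # Vulnerable per the advisory: before 3.4.1, or (assumed) 4.0.0 up to
    # but not including 4.3.1.
    v = parse_version(bootstrap_version)
    return v < (3, 4, 1) or (4, 0, 0) <= v < (4, 3, 1)

if __name__ == "__main__":
    for version in ("3.3.5", "3.4.1", "4.3.0", "4.3.1"):
        print(version, is_vulnerable_cve_2019_8331(version))
```

Run against the pinned `bootstrap-3.3.5.min.js` from the record, this flags the dependency as affected and points at the 3.4.1 upgrade suggested in the fix resolution.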
20,478 | 27,135,518,275 | IssuesEvent | 2023-02-16 12:57:35 | vesoft-inc/nebula | https://api.github.com/repos/vesoft-inc/nebula | reopened | The storaged service keeps sending snapshots and an infinite loop is embedded. | type/bug severity/none process/fixed affects/none | **Please check the FAQ documentation before raising an issue**
<!-- Please check the [FAQ](https://docs.nebula-graph.com.cn/master/20.appendix/0.FAQ/) documentation and old issues before raising an issue in case someone has asked the same question that you are asking. -->
**Describe the bug (__required__)**
The storaged log records that the storage keeps synchronizing snapshots but fails. The storage keeps performing operations on the same commitlogid.
this bug is the same as this issue https://discuss.nebula-graph.com.cn/t/topic/11085
```
to 10485760, batch size is 1048576
I20230216 10:52:45.735262 3002064 NebulaSnapshotManager.cpp:67] Space 10 Part 41 start send snapshot of commitLogId 72743 commitLogTerm 9, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.735381 3002050 NebulaSnapshotManager.cpp:67] Space 6 Part 94 start send snapshot of commitLogId 67428 commitLogTerm 58, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.736137 3002053 NebulaSnapshotManager.cpp:67] Space 6 Part 96 start send snapshot of commitLogId 66841 commitLogTerm 97, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.736299 3002063 NebulaSnapshotManager.cpp:67] Space 77 Part 61 start send snapshot of commitLogId 3177856 commitLogTerm 5, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.736881 3002064 NebulaSnapshotManager.cpp:67] Space 4 Part 10 start send snapshot of commitLogId 69477 commitLogTerm 14, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.737025 3002050 NebulaSnapshotManager.cpp:67] Space 73 Part 13 start send snapshot of commitLogId 3403616 commitLogTerm 7, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.737371 3002053 NebulaSnapshotManager.cpp:67] Space 76 Part 14 start send snapshot of commitLogId 3489696 commitLogTerm 6, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.737464 3002063 NebulaSnapshotManager.cpp:67] Space 6 Part 98 start send snapshot of commitLogId 63176 commitLogTerm 14, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738029 3002064 NebulaSnapshotManager.cpp:67] Space 75 Part 92 start send snapshot of commitLogId 2041251 commitLogTerm 12, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738067 3002050 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738116 3002053 NebulaSnapshotManager.cpp:67] Space 3 Part 41 start send snapshot of commitLogId 67615 commitLogTerm 16, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738318 3002064 NebulaSnapshotManager.cpp:67] Space 6 Part 81 start send snapshot of commitLogId 62274 commitLogTerm 27, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738319 3002063 NebulaSnapshotManager.cpp:67] Space 2 Part 32 start send snapshot of commitLogId 68144 commitLogTerm 16, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738350 3002063 NebulaSnapshotManager.cpp:67] Space 74 Part 20 start send snapshot of commitLogId 3312870 commitLogTerm 105, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738353 3002050 NebulaSnapshotManager.cpp:67] Space 72 Part 95 start send snapshot of commitLogId 2172354 commitLogTerm 4, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738364 3002053 NebulaSnapshotManager.cpp:67] Space 4 Part 48 start send snapshot of commitLogId 67348 commitLogTerm 29, rate limited to 10485760, batch size is 1048576
```
```
[2023-02-16 11:04:46] (10.97.162.200@tysearch)> tail -f logs/nebula-storaged.INFO | grep 67170
I20230216 11:06:36.106393 3002050 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:36.144593 3002064 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:36.185752 3002064 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:36.227294 3002063 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:36.274462 3002063 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:36.316789 3002050 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:40.240387 3002053 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:40.255889 3002053 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:40.297564 3002050 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:40.337299 3002053 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:40.378110 3002050 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
```
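A throwaway triage sketch for log excerpts like the ones above: count how often each (space, part) pair starts a snapshot, so a partition stuck in a retry loop (like Space 8 Part 32 here) stands out. The regex assumes the exact `NebulaSnapshotManager.cpp` message format shown in this issue and is not part of Nebula itself.

```python
import re
from collections import Counter

# Matches the snapshot-start lines emitted by NebulaSnapshotManager.cpp
# in the excerpts above; the pattern is tailored to that exact format.
LINE_RE = re.compile(
    r"Space (\d+) Part (\d+) start send snapshot of commitLogId (\d+)"
)

def count_snapshot_starts(lines):
    # Count snapshot-start events per (space, part); an abnormally high
    # count indicates a partition that keeps resending the same snapshot.
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            counts[(int(m.group(1)), int(m.group(2)))] += 1
    return counts

sample = [
    "I20230216 11:06:36.106393 3002050 NebulaSnapshotManager.cpp:67] "
    "Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8",
    "I20230216 11:06:36.144593 3002064 NebulaSnapshotManager.cpp:67] "
    "Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8",
    "I20230216 10:52:45.735262 3002064 NebulaSnapshotManager.cpp:67] "
    "Space 10 Part 41 start send snapshot of commitLogId 72743 commitLogTerm 9",
]
print(count_snapshot_starts(sample))
```

Feeding the full `nebula-storaged.INFO` into `count_snapshot_starts` (e.g. via `open(path)`) would quantify how many partitions are looping rather than grepping one commitLogId at a time.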
<!-- A clear and concise description of what the bug is. -->
**Your Environments (__required__)**
- os
```
Linux host-7-219-10-133 3.10.0-862.14.1.1.h224.eulerosv2r7.x86_64 #1 SMP Tue Feb 12 00:00:00 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```
- g++
```
g++ (GCC) 10.2.0
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```
- lscpu
```
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6266C CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 30976K
NUMA node0 CPU(s): 0-15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
commit id: https://github.com/vesoft-inc/nebula/commit/2e938c767cce4507ee0ea767e8dd2bf7bd1711ca
```
* OS: `uname -a`
* Compiler: `g++ --version` or `clang++ --version`
* CPU: `lscpu`
* Commit id (e.g. `a3ffc7d8`)
**How To Reproduce (__required__)**
Steps to reproduce the behavior:
1. Step 1
2. Step 2
3. Step 3
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Additional context**
<!-- Provide logs and configs, or any other context to trace the problem. -->
| 1.0 | The storaged service keeps sending snapshots and an infinite loop is embedded. - **Please check the FAQ documentation before raising an issue**
<!-- Please check the [FAQ](https://docs.nebula-graph.com.cn/master/20.appendix/0.FAQ/) documentation and old issues before raising an issue in case someone has asked the same question that you are asking. -->
**Describe the bug (__required__)**
The storaged log shows that the storage service keeps trying to synchronize snapshots but fails, and it repeatedly performs the same operation on the same commitLogId.
This bug is the same as the one reported in https://discuss.nebula-graph.com.cn/t/topic/11085
```
to 10485760, batch size is 1048576
I20230216 10:52:45.735262 3002064 NebulaSnapshotManager.cpp:67] Space 10 Part 41 start send snapshot of commitLogId 72743 commitLogTerm 9, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.735381 3002050 NebulaSnapshotManager.cpp:67] Space 6 Part 94 start send snapshot of commitLogId 67428 commitLogTerm 58, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.736137 3002053 NebulaSnapshotManager.cpp:67] Space 6 Part 96 start send snapshot of commitLogId 66841 commitLogTerm 97, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.736299 3002063 NebulaSnapshotManager.cpp:67] Space 77 Part 61 start send snapshot of commitLogId 3177856 commitLogTerm 5, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.736881 3002064 NebulaSnapshotManager.cpp:67] Space 4 Part 10 start send snapshot of commitLogId 69477 commitLogTerm 14, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.737025 3002050 NebulaSnapshotManager.cpp:67] Space 73 Part 13 start send snapshot of commitLogId 3403616 commitLogTerm 7, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.737371 3002053 NebulaSnapshotManager.cpp:67] Space 76 Part 14 start send snapshot of commitLogId 3489696 commitLogTerm 6, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.737464 3002063 NebulaSnapshotManager.cpp:67] Space 6 Part 98 start send snapshot of commitLogId 63176 commitLogTerm 14, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738029 3002064 NebulaSnapshotManager.cpp:67] Space 75 Part 92 start send snapshot of commitLogId 2041251 commitLogTerm 12, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738067 3002050 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738116 3002053 NebulaSnapshotManager.cpp:67] Space 3 Part 41 start send snapshot of commitLogId 67615 commitLogTerm 16, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738318 3002064 NebulaSnapshotManager.cpp:67] Space 6 Part 81 start send snapshot of commitLogId 62274 commitLogTerm 27, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738319 3002063 NebulaSnapshotManager.cpp:67] Space 2 Part 32 start send snapshot of commitLogId 68144 commitLogTerm 16, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738350 3002063 NebulaSnapshotManager.cpp:67] Space 74 Part 20 start send snapshot of commitLogId 3312870 commitLogTerm 105, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738353 3002050 NebulaSnapshotManager.cpp:67] Space 72 Part 95 start send snapshot of commitLogId 2172354 commitLogTerm 4, rate limited to 10485760, batch size is 1048576
I20230216 10:52:45.738364 3002053 NebulaSnapshotManager.cpp:67] Space 4 Part 48 start send snapshot of commitLogId 67348 commitLogTerm 29, rate limited to 10485760, batch size is 1048576
```
```
[2023-02-16 11:04:46] (10.97.162.200@tysearch)> tail -f logs/nebula-storaged.INFO | grep 67170
I20230216 11:06:36.106393 3002050 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:36.144593 3002064 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:36.185752 3002064 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:36.227294 3002063 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:36.274462 3002063 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:36.316789 3002050 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:40.240387 3002053 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:40.255889 3002053 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:40.297564 3002050 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:40.337299 3002053 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
I20230216 11:06:40.378110 3002050 NebulaSnapshotManager.cpp:67] Space 8 Part 32 start send snapshot of commitLogId 67170 commitLogTerm 8, rate limited to 10485760, batch size is 1048576
```
<!-- A clear and concise description of what the bug is. -->
**Your Environments (__required__)**
- os
```
Linux host-7-219-10-133 3.10.0-862.14.1.1.h224.eulerosv2r7.x86_64 #1 SMP Tue Feb 12 00:00:00 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```
- g++
```
g++ (GCC) 10.2.0
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```
- lscpu
```
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6266C CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3000.000
BogoMIPS: 6000.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 30976K
NUMA node0 CPU(s): 0-15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
commit id: https://github.com/vesoft-inc/nebula/commit/2e938c767cce4507ee0ea767e8dd2bf7bd1711ca
```
* OS: `uname -a`
* Compiler: `g++ --version` or `clang++ --version`
* CPU: `lscpu`
* Commit id (e.g. `a3ffc7d8`)
**How To Reproduce(__required__)**
Steps to reproduce the behavior:
1. Step 1
2. Step 2
3. Step 3
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Additional context**
<!-- Provide logs and configs, or any other context to trace the problem. -->
| process | the storaged service keeps sending snapshots and an infinite loop is embedded please check the faq documentation before raising an issue describe the bug required the storaged log records that the storage keeps synchronizing snapshots but fails the storage keeps performing operations on the same commitlogid this bug is the same as this issue to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is 
nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is tysearch tail f logs nebula storaged info grep nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is nebulasnapshotmanager cpp space part start send snapshot of commitlogid commitlogterm rate limited to batch size is your environments required os linux host smp tue feb utc gnu linux g g gcc copyright c free software foundation inc this is free software see the source for copying conditions there is no warranty not even for merchantability or fitness for a particular purpose lscpu architecture cpu op mode s bit bit byte order little endian cpu s on line cpu s list thread s per core core s per socket socket s numa node s vendor id genuineintel cpu family 
model model name intel r xeon r gold cpu stepping cpu mhz bogomips hypervisor vendor kvm virtualization type full cache cache cache cache numa cpu s flags fpu vme de pse tsc msr pae mce apic sep mtrr pge mca cmov pat clflush mmx fxsr sse ss ht syscall nx rdtscp lm constant tsc rep good nopl xtopology nonstop tsc eagerfpu pni pclmulqdq fma pcid movbe popcnt tsc deadline timer aes xsave avx rdrand hypervisor lahf lm abm ssbd ibrs ibpb stibp fsgsbase tsc adjust hle smep erms invpcid rtm mpx rdseed adx smap clflushopt clwb xsaveopt xsavec arat vnni md clear spec ctrl intel stibp flush arch capabilities commit id os uname a compiler g version or clang version cpu lscpu commit id e g how to reproduce required steps to reproduce the behavior step step step expected behavior additional context | 1 |
1,811 | 4,553,853,277 | IssuesEvent | 2016-09-13 07:14:54 | openvstorage/volumedriver | https://api.github.com/repos/openvstorage/volumedriver | closed | Excessive logging if no voldrv cluster cache is available | priority_urgent process_wontfix type_bug | This issue was encountered before when we were still using a clustercache and the device hosting the clustercache died. Cfr http://jira.openvstorage.com/browse/OVS-4065
In this case however there was no clustercache configured at all
```
{
"read_cache_serialization_path": "/var/rsp/vmstor",
"clustercache_mount_points": []
}
```
Log file was flooded with messages like
```
2016-09-08 09:51:10 143233 +0200 - ovs-02.be-gen8-2 - 3641/0x00007fd274fe9700 - volumedriverfs/ClusterClusterCache - 0000000019dc42e8 - warning - add: Failed to allocate an entry for handle 2850 - are all devices gone or all entries consumed by other namespaces?
2016-09-08 09:51:10 144676 +0200 - ovs-02.be-gen8-2 - 3641/0x00007fd2697fa700 - volumedriverfs/ClusterClusterCache - 0000000019dc42e9 - warning - add: Failed to allocate an entry for handle 2853 - are all devices gone or all entries consumed by other namespaces?
2016-09-08 09:51:10 144811 +0200 - ovs-02.be-gen8-2 - 3641/0x00007fd279016700 - volumedriverfs/ClusterClusterCache - 0000000019dc42ea - warning - add: Failed to allocate an entry for handle 2853 - are all devices gone or all entries consumed by other namespaces?
```
| 1.0 | Excessive logging if no voldrv cluster cache is available - This issue was encountered before when we were still using a clustercache and the device hosting the clustercache died. Cfr http://jira.openvstorage.com/browse/OVS-4065
In this case however there was no clustercache configured at all
```
{
"read_cache_serialization_path": "/var/rsp/vmstor",
"clustercache_mount_points": []
}
```
Log file was flooded with messages like
```
2016-09-08 09:51:10 143233 +0200 - ovs-02.be-gen8-2 - 3641/0x00007fd274fe9700 - volumedriverfs/ClusterClusterCache - 0000000019dc42e8 - warning - add: Failed to allocate an entry for handle 2850 - are all devices gone or all entries consumed by other namespaces?
2016-09-08 09:51:10 144676 +0200 - ovs-02.be-gen8-2 - 3641/0x00007fd2697fa700 - volumedriverfs/ClusterClusterCache - 0000000019dc42e9 - warning - add: Failed to allocate an entry for handle 2853 - are all devices gone or all entries consumed by other namespaces?
2016-09-08 09:51:10 144811 +0200 - ovs-02.be-gen8-2 - 3641/0x00007fd279016700 - volumedriverfs/ClusterClusterCache - 0000000019dc42ea - warning - add: Failed to allocate an entry for handle 2853 - are all devices gone or all entries consumed by other namespaces?
```
| process | excessive logging if no voldrv cluster cache is available this issue was encountered before when we were still using a clustercache and the device hosting the clustercache died cfr in this case however there was no clustercache configured at all read cache serialization path var rsp vmstor clustercache mount points log file was flooded with messages like ovs be volumedriverfs clusterclustercache warning add failed to allocate an entry for handle are all devices gone or all entries consumed by other namespaces ovs be volumedriverfs clusterclustercache warning add failed to allocate an entry for handle are all devices gone or all entries consumed by other namespaces ovs be volumedriverfs clusterclustercache warning add failed to allocate an entry for handle are all devices gone or all entries consumed by other namespaces | 1 |
14,393 | 3,265,799,115 | IssuesEvent | 2015-10-22 17:50:20 | rubytaiwan/AMA | https://api.github.com/repos/rubytaiwan/AMA | closed | Mobile App UI 設計問題 | Android Design iOS | 例如要設計一個 App 的界面,希望這個界面可以在 iOS 跟 Android 系統上都長一樣。
會用哪個尺寸開始設計,以及怎麼去適應到其他尺寸,並出圖給工程師。
謝謝! | 1.0 | Mobile App UI 設計問題 - 例如要設計一個 App 的界面,希望這個界面可以在 iOS 跟 Android 系統上都長一樣。
會用哪個尺寸開始設計,以及怎麼去適應到其他尺寸,並出圖給工程師。
謝謝! | non_process | mobile app ui 設計問題 例如要設計一個 app 的界面,希望這個界面可以在 ios 跟 android 系統上都長一樣。 會用哪個尺寸開始設計,以及怎麼去適應到其他尺寸,並出圖給工程師。 謝謝! | 0 |
53,961 | 13,233,848,499 | IssuesEvent | 2020-08-18 15:23:08 | golang/go | https://api.github.com/repos/golang/go | closed | x/build: add Amazon EC2 ARM instances for builders | Builders NeedsInvestigation | As @bradfitz noted, we should explore adding ephemeral AWS ARM instances for both 32-bit and 64-bit ARM builds.
@toothrot @dmitshur | 1.0 | x/build: add Amazon EC2 ARM instances for builders - As @bradfitz noted, we should explore adding ephemeral AWS ARM instances for both 32-bit and 64-bit ARM builds.
@toothrot @dmitshur | non_process | x build add amazon arm instances for builders as bradfitz noted we should explore adding ephemeral aws arm instances for both bit and bit arm builds toothrot dmitshur | 0 |
291,904 | 21,943,704,046 | IssuesEvent | 2022-05-23 21:04:27 | google/jax | https://api.github.com/repos/google/jax | closed | non-Python type signatures in docs are confusing for a Python package | documentation | The docs include confusing type signatures, using what I assume is Haskell syntax (please correct me if I'm wrong---I never learned Haskell). Two examples I found are:
https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scan.html#jax.lax.scan
https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.fori_loop.html#jax.lax.fori_loop
PS: sorry, I didn't see an option to report an issue with the docs, so just used the "bugs" issue template. | 1.0 | non-Python type signatures in docs are confusing for a Python package - The docs include confusing type signatures, using what I assume is Haskell syntax (please correct me if I'm wrong---I never learned Haskell). Two examples I found are:
https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.scan.html#jax.lax.scan
https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.fori_loop.html#jax.lax.fori_loop
PS: sorry, I didn't see an option to report an issue with the docs, so just used the "bugs" issue template. | non_process | non python type signatures in docs are confusing for a python package the docs include confusing type signatures using what i assume is haskell syntax please correct me if i m wrong i never learned haskell two examples i found are ps sorry i didn t see an option to report an issue with the docs so just used the bugs issue template | 0 |
9,141 | 11,169,947,315 | IssuesEvent | 2019-12-28 09:54:58 | hanubeki/noteskin-hanubeki | https://api.github.com/repos/hanubeki/noteskin-hanubeki | opened | PSA: For 5.3 OutFox users | incompatibility | For 5.3 OutFox users, get [5.0-final](https://github.com/hanubeki/noteskin-hanubeki/tree/5.0-final) tag and put them into Appearance/NoteSkins/global directory.
| True | PSA: For 5.3 OutFox users - For 5.3 OutFox users, get [5.0-final](https://github.com/hanubeki/noteskin-hanubeki/tree/5.0-final) tag and put them into Appearance/NoteSkins/global directory.
| non_process | psa for outfox users for outfox users get tag and put them into appearance noteskins global directory | 0 |
2,823 | 3,201,536,260 | IssuesEvent | 2015-10-02 08:06:07 | choko/MT-ComparEval | https://api.github.com/repos/choko/MT-ComparEval | opened | I cannot see all metrics | usability issue | The new version of MT ComparEval has more metrics. So the table that displays the metrics when I compare two tasks is wider than before:

It is so wide it does not fit. I have to scroll right to see all metrics. When I want to use the scroll bar which is on the bottom of a long page then I have to scroll quickly: It disappears quickly because new rows are loaded.
On my screen there is a lot of unused space left and right of the content area. A wider content area would solve this problem. | True | I cannot see all metrics - The new version of MT ComparEval has more metrics. So the table that displays the metrics when I compare two tasks is wider than before:

It is so wide it does not fit. I have to scroll right to see all metrics. When I want to use the scroll bar which is on the bottom of a long page then I have to scroll quickly: It disappears quickly because new rows are loaded.
On my screen there is a lot of unused space left and right of the content area. A wider content area would solve this problem. | non_process | i cannot see all metrics the new version of mt compareval has more metrics so the table that displays the metrics when i compare two tasks is wider than before it is so wide it does not fit i have to scroll right to see all metrics when i want to use the scroll bar which is on the bottom of a long page then i have to scroll quickly it disappears quickly because new rows are loaded on my screen there is a lot of unused space left and right of the content area a wider content area would solve this problem | 0 |
229,921 | 17,594,677,415 | IssuesEvent | 2021-08-17 02:12:34 | jcontrolresearch/DataStructure | https://api.github.com/repos/jcontrolresearch/DataStructure | opened | Listas enlazadas | documentation enhancement question | 
Escriba un programa que utilice los ejemplos en clase sobre las funciones de listas enlazadas.
Puede encontrar el código aquí: [Listas enlazadas](https://github.com/jcontrolresearch/DataStructure/tree/main/Code/Chapter_1-LinkedList/Book_Example)
Lineamientos del programa:
- Hacer un menu de biblioteca: Las opciones van a ser las siguientes
1. [1] Agregar un nuevo libro: Preguntar al usuario el titulo, autor e isbn
2. [2] Listar los libros: titulo | autor | isbn / si no hay elementos, desplegar: tu biblioteca esta vacia
3. [3] Eliminar libro: Preguntar donde lo queremos eliminar. 1) Usar un submenu / 2) Pedir n y verificar si n=0 eliminar el principio, si n = long(lista) elimanr el final sino eliminan en la posicion n
4. [4] Cuantos libros tengo? : Desplegar los libros
Para su evaluacion considere entregar:
1. Codigo basado en las [rubricas de evalucion](https://github.com/jcontrolresearch/DataStructure/blob/main/Documents/Guidelines/Rubricas%20de%20evalucion.xlsx)
2. Documento de prueba de ejecucion del codigo [One-pager](https://github.com/jcontrolresearch/DataStructure/blob/main/Documents/Guidelines/Chapter_1-Linked-List/One_pager_example.pptx)
Las pruebas minimas necesarias e instrucciones del documento estan incluidos en el [one_pager](https://github.com/jcontrolresearch/DataStructure/blob/main/Documents/Guidelines/Chapter_1-Linked-List/One_pager_example.pptx)
Si tiene alguna duda no duden en contactarme.
Have fun! | 1.0 | Listas enlazadas - 
Escriba un programa que utilice los ejemplos en clase sobre las funciones de listas enlazadas.
Puede encontrar el código aquí: [Listas enlazadas](https://github.com/jcontrolresearch/DataStructure/tree/main/Code/Chapter_1-LinkedList/Book_Example)
Lineamientos del programa:
- Hacer un menu de biblioteca: Las opciones van a ser las siguientes
1. [1] Agregar un nuevo libro: Preguntar al usuario el titulo, autor e isbn
2. [2] Listar los libros: titulo | autor | isbn / si no hay elementos, desplegar: tu biblioteca esta vacia
3. [3] Eliminar libro: Preguntar donde lo queremos eliminar. 1) Usar un submenu / 2) Pedir n y verificar si n=0 eliminar el principio, si n = long(lista) elimanr el final sino eliminan en la posicion n
4. [4] Cuantos libros tengo? : Desplegar los libros
Para su evaluacion considere entregar:
1. Codigo basado en las [rubricas de evalucion](https://github.com/jcontrolresearch/DataStructure/blob/main/Documents/Guidelines/Rubricas%20de%20evalucion.xlsx)
2. Documento de prueba de ejecucion del codigo [One-pager](https://github.com/jcontrolresearch/DataStructure/blob/main/Documents/Guidelines/Chapter_1-Linked-List/One_pager_example.pptx)
Las pruebas minimas necesarias e instrucciones del documento estan incluidos en el [one_pager](https://github.com/jcontrolresearch/DataStructure/blob/main/Documents/Guidelines/Chapter_1-Linked-List/One_pager_example.pptx)
Si tiene alguna duda no duden en contactarme.
Have fun! | non_process | listas enlazadas escriba un programa que utilice los ejemplos en clase sobre las funciones de listas enlazadas puede encontrar el código aquí lineamientos del programa hacer un menu de biblioteca las opciones van a ser las siguientes agregar un nuevo libro preguntar al usuario el titulo autor e isbn listar los libros titulo autor isbn si no hay elementos desplegar tu biblioteca esta vacia eliminar libro preguntar donde lo queremos eliminar usar un submenu pedir n y verificar si n eliminar el principio si n long lista elimanr el final sino eliminan en la posicion n cuantos libros tengo desplegar los libros para su evaluacion considere entregar codigo basado en las documento de prueba de ejecucion del codigo las pruebas minimas necesarias e instrucciones del documento estan incluidos en el si tiene alguna duda no duden en contactarme have fun | 0 |
473,284 | 13,639,691,142 | IssuesEvent | 2020-09-25 11:28:28 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.guru99.com - see bug description | browser-chrome ml-needsdiagnosis-false priority-normal | <!-- @browser: google chrome -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/58786 -->
**URL**: https://www.guru99.com/bugzilla-tutorial-for-beginners.html
**Browser / Version**: google chrome
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Something else
**Description**: Test data
**Steps to Reproduce**:
Test the defect flow
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.guru99.com - see bug description - <!-- @browser: google chrome -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/58786 -->
**URL**: https://www.guru99.com/bugzilla-tutorial-for-beginners.html
**Browser / Version**: google chrome
**Operating System**: Windows 10
**Tested Another Browser**: Yes Edge
**Problem type**: Something else
**Description**: Test data
**Steps to Reproduce**:
Test the defect flow
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_process | see bug description url browser version google chrome operating system windows tested another browser yes edge problem type something else description test data steps to reproduce test the defect flow browser configuration none from with ❤️ | 0 |
9,266 | 12,300,322,253 | IssuesEvent | 2020-05-11 13:49:19 | Hold-Krykke/PythonExam | https://api.github.com/repos/Hold-Krykke/PythonExam | closed | Preprocessing | Preprocessing USER STORY | * Teknologier:
* Regex
* Natural Language Toolkit (NLTK)
__TASKS__
* [x] Find og download evt. testdata mens web scraperen bliver bygget (puttes i separat folder, ala /tweets/test_tweets)
* [x] lav en funktion der kan kigge efter tweets i en specifik folder og loade dataen ind.
lavet i `app.py`.
* [x] dataen skal kigges igennem for stop-words og evt. andre forstyrrende elementer (emojis, tegnsætning osv) og det skal fjernes fra data'en. Se evt. link i readme
* ~~den tilpassede data gemmes i en ny folder (/training/hashtag)~~
* [ ] lav evt. test for at se om dataen er rengjort korrekt
**Added later**
* [x] include assosciated_hashtags (search for **#**), assosciated_persons (search for **@**), assosciated_urls (?) to be displayed at a later point
many users add subjects themselves to their tweets- can we find correlations?
example:

| 1.0 | Preprocessing - * Teknologier:
* Regex
* Natural Language Toolkit (NLTK)
__TASKS__
* [x] Find og download evt. testdata mens web scraperen bliver bygget (puttes i separat folder, ala /tweets/test_tweets)
* [x] lav en funktion der kan kigge efter tweets i en specifik folder og loade dataen ind.
lavet i `app.py`.
* [x] dataen skal kigges igennem for stop-words og evt. andre forstyrrende elementer (emojis, tegnsætning osv) og det skal fjernes fra data'en. Se evt. link i readme
* ~~den tilpassede data gemmes i en ny folder (/training/hashtag)~~
* [ ] lav evt. test for at se om dataen er rengjort korrekt
**Added later**
* [x] include assosciated_hashtags (search for **#**), assosciated_persons (search for **@**), assosciated_urls (?) to be displayed at a later point
many users add subjects themselves to their tweets- can we find correlations?
example:

| process | preprocessing teknologier regex natural language toolkit nltk tasks find og download evt testdata mens web scraperen bliver bygget puttes i separat folder ala tweets test tweets lav en funktion der kan kigge efter tweets i en specifik folder og loade dataen ind lavet i app py dataen skal kigges igennem for stop words og evt andre forstyrrende elementer emojis tegnsætning osv og det skal fjernes fra data en se evt link i readme den tilpassede data gemmes i en ny folder training hashtag lav evt test for at se om dataen er rengjort korrekt added later include assosciated hashtags search for assosciated persons search for assosciated urls to be displayed at a later point many users add subjects themselves to their tweets can we find correlations example | 1 |
7,812 | 10,964,369,770 | IssuesEvent | 2019-11-27 22:22:07 | codeuniversity/smag-mvp | https://api.github.com/repos/codeuniversity/smag-mvp | closed | Create face detection pre-filter worker | Image Processing | - read internal_picture_url and post_id from kafka topic
- detect faces and create boundries
- write post_id, internal_picture_url and boundries to face detection job | 1.0 | Create face detection pre-filter worker - - read internal_picture_url and post_id from kafka topic
- detect faces and create boundries
- write post_id, internal_picture_url and boundries to face detection job | process | create face detection pre filter worker read internal picture url and post id from kafka topic detect faces and create boundries write post id internal picture url and boundries to face detection job | 1 |
165,113 | 12,829,039,158 | IssuesEvent | 2020-07-06 21:51:32 | geoffhumphrey/brewcompetitiononlineentry | https://api.github.com/repos/geoffhumphrey/brewcompetitiononlineentry | closed | 2.1.12 - Seperating scoresheets from results | fixed in latest version | Version:
21.12
Installation URL:
Is your installation hosted on brewcompetition.com or brewcomp.com?
Description of Issue:
Is there a way to make the scoresheets available to the participants without posting winners on the front page?
We would like to post the winners on a separate site for advertising a sponsor purposes. Also, our categories are not displaying correctly (issue #974) This situation is similar to issue #694 posted back in 2016.
Thanks
For enhancements, please prepend "Enhancement - " to the title. For issues/bugs, please prepend the BCOE&M version number the title.
| 1.0 | 2.1.12 - Seperating scoresheets from results - Version:
21.12
Installation URL:
Is your installation hosted on brewcompetition.com or brewcomp.com?
Description of Issue:
Is there a way to make the scoresheets available to the participants without posting winners on the front page?
We would like to post the winners on a separate site for advertising a sponsor purposes. Also, our categories are not displaying correctly (issue #974) This situation is similar to issue #694 posted back in 2016.
Thanks
For enhancements, please prepend "Enhancement - " to the title. For issues/bugs, please prepend the BCOE&M version number the title.
| non_process | seperating scoresheets from results version installation url is your installation hosted on brewcompetition com or brewcomp com description of issue is there a way to make the scoresheets available to the participants without posting winners on the front page we would like to post the winners on a separate site for advertising a sponsor purposes also our categories are not displaying correctly issue this situation is similar to issue posted back in thanks for enhancements please prepend enhancement to the title for issues bugs please prepend the bcoe m version number the title | 0 |
97,378 | 12,230,644,722 | IssuesEvent | 2020-05-04 05:35:47 | Qiskit/qiskit.org | https://api.github.com/repos/Qiskit/qiskit.org | closed | user finds IBMQ account in header confusing | Human Design type: user story | users are confused by IBMQ account in header.
By proximity, it looks like another element of qiskit
By prominence on webpage, it confuses the relationship between qiskit and iqx | 1.0 | user finds IBMQ account in header confusing - users are confused by IBMQ account in header.
By proximity, it looks like another element of qiskit
By prominence on webpage, it confuses the relationship between qiskit and iqx | non_process | user finds ibmq account in header confusing users are confused by ibmq account in header by proximity it looks like another element of qiskit by prominence on webpage it confuses the relationship between qiskit and iqx | 0 |
554 | 3,014,355,450 | IssuesEvent | 2015-07-29 14:29:05 | DynareTeam/dynare | https://api.github.com/repos/DynareTeam/dynare | reopened | Fix translation of histval to one-lag problem | bug preprocessor | For deterministic simulations with more than one lag, the translation to a problem with one lag for use in ```sim1``` does not work.
The mod-file
```
var c k z_forward z_backward;
varexo x z_shock;
parameters alph gam delt bet aa;
alph=0.5;
gam=0.5;
delt=0.02;
bet=0.05;
aa=0.5;
model;
c + k - aa*x*k(-1)^alph - (1-delt)*k(-1); // Resource constraint
c^(-gam) - (1+bet)^(-1)*(aa*alph*x(+1)*k^(alph-1) + 1 - delt)*c(+1)^(-gam); // Euler equation
z_backward=0.1*1+0.3*z_backward(-1)+0.3*z_backward(-2)+0.3*z_backward(-3)+(x(-4)-1);
z_forward=0.1*1+0.45*z_forward(+1)+0.45*z_forward(+2)+(x(+4)-1);
end;
initval;
c = 1.2;
k = 12;
x = 1;
end;
histval;
x(-1)=1.30;
x(-2)=1.30;
end;
shocks;
var x;
periods 2;
values 0.9;
end;
simul(periods=200,maxit=100);
```
shows that the problem comes from ```M_.endo_histval```, which is set to
```
M_.endo_histval = zeros(M_.endo_nbr,M_.maximum_lag);
oo_.exo_simul( M_.maximum_lag + -2, 1 ) = 1.30;
oo_.exo_simul( M_.maximum_lag + -1, 1 ) = 1.30;
```
and thus has more than one initial period. | 1.0 | Fix translation of histval to one-lag problem - For deterministic simulations with more than one lag, the translation to a problem with one lag for use in ```sim1``` does not work.
The mod-file
```
var c k z_forward z_backward;
varexo x z_shock;
parameters alph gam delt bet aa;
alph=0.5;
gam=0.5;
delt=0.02;
bet=0.05;
aa=0.5;
model;
c + k - aa*x*k(-1)^alph - (1-delt)*k(-1); // Resource constraint
c^(-gam) - (1+bet)^(-1)*(aa*alph*x(+1)*k^(alph-1) + 1 - delt)*c(+1)^(-gam); // Euler equation
z_backward=0.1*1+0.3*z_backward(-1)+0.3*z_backward(-2)+0.3*z_backward(-3)+(x(-4)-1);
z_forward=0.1*1+0.45*z_forward(+1)+0.45*z_forward(+2)+(x(+4)-1);
end;
initval;
c = 1.2;
k = 12;
x = 1;
end;
histval;
x(-1)=1.30;
x(-2)=1.30;
end;
shocks;
var x;
periods 2;
values 0.9;
end;
simul(periods=200,maxit=100);
```
shows that the problem comes from ```M_.endo_histval```, which is set to
```
M_.endo_histval = zeros(M_.endo_nbr,M_.maximum_lag);
oo_.exo_simul( M_.maximum_lag + -2, 1 ) = 1.30;
oo_.exo_simul( M_.maximum_lag + -1, 1 ) = 1.30;
```
and thus has more than one initial period. | process | fix translation of histval to one lag problem for deterministic simulations with more than one lag the translation to a problem with one lag for use in does not work the mod file var c k z forward z backward varexo x z shock parameters alph gam delt bet aa alph gam delt bet aa model c k aa x k alph delt k resource constraint c gam bet aa alph x k alph delt c gam euler equation z backward z backward z backward z backward x z forward z forward z forward x end initval c k x end histval x x end shocks var x periods values end simul periods maxit shows that the problem comes from m endo histval which is set to m endo histval zeros m endo nbr m maximum lag oo exo simul m maximum lag oo exo simul m maximum lag and thus has more than one initial period | 1 |
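The record above hinges on how a multi-lag model is rewritten as a one-lag problem for `sim1`: the lags are stacked into an augmented state, so the solver needs a full k-period history as its single initial condition, which is what a multi-column `M_.endo_histval` must supply. A minimal sketch of that stacking idea, independent of Dynare's actual internals (function name and shapes here are illustrative only):

```python
def companion_step(coeffs, state):
    """One step of z_t = sum_i coeffs[i] * z_{t-i}, rewritten as a one-lag
    update on the stacked state [z_{t-1}, ..., z_{t-k}] (companion form)."""
    z_new = sum(c * z for c, z in zip(coeffs, state))
    # shift: the new value becomes the most recent lag, the oldest drops out
    return [z_new] + state[:-1]

# z_t = 0.5*z_{t-1} + 0.5*z_{t-2} with history z(-1)=1, z(-2)=3
state = [1.0, 3.0]                         # a full 2-period history is required up front
state = companion_step([0.5, 0.5], state)  # -> [2.0, 1.0]
```

If only a single period of history is available, the stacked initial state cannot be formed, which matches the failure described for `M_.endo_histval` with more than one initial period.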
6,974 | 10,122,130,037 | IssuesEvent | 2019-07-31 17:12:08 | syndesisio/syndesis | https://api.github.com/repos/syndesisio/syndesis | closed | Continuous Integration improvement: Better Elephant carpaccio | cat/process cat/research | As a result of our retrospective, we agreed to find ways to cut features in smaller end-to-end deliverables. It would allow us to integrate our software faster and recover feedback sooner. | 1.0 | Continuous Integration improvement: Better Elephant carpaccio - As a result of our retrospective, we agreed to find ways to cut features in smaller end-to-end deliverables. It would allow us to integrate our software faster and recover feedback sooner. | process | continuous integration improvement better elephant carpaccio as a result of our retrospective we agreed to find ways to cut features in smaller end to end deliverables it would allow us to integrate our software faster and recover feedback sooner | 1 |
5,138 | 7,921,847,764 | IssuesEvent | 2018-07-05 08:55:18 | emacs-ess/ESS | https://api.github.com/repos/emacs-ess/ESS | opened | Prompts are not displayed on a newline after evaluating empty string | process:eval | With cursor at an empty prompt:
```
> <cursor>
```
Hitting RET several times produces this:
```
> > > <cursor>
```
Instead of
```
>
>
> <cursor>
``` | 1.0 | Prompts are not displayed on a newline after evaluating empty string - With cursor at an empty prompt:
```
> <cursor>
```
Hitting RET several times produces this:
```
> > > <cursor>
```
Instead of
```
>
>
> <cursor>
``` | process | prompts are not displayed on a newline after evaluating empty string with cursor at an empty prompt hitting ret several times produces this instead of | 1 |
784,297 | 27,565,097,055 | IssuesEvent | 2023-03-08 02:41:45 | ubiquity/bounty-bot | https://api.github.com/repos/ubiquity/bounty-bot | closed | Set Price is Broken | Time: <1 Hour Priority: 3 (Urgent) Price: 50 USD | @0xcodercrane looks like pricing is broken
_Originally posted by @pavlovcik in https://github.com/ubiquity/bounty-bot/issues/174#issuecomment-1458705009_
| 1.0 | Set Price is Broken - @0xcodercrane looks like pricing is broken
_Originally posted by @pavlovcik in https://github.com/ubiquity/bounty-bot/issues/174#issuecomment-1458705009_
| non_process | set price is broken looks like pricing is broken originally posted by pavlovcik in | 0 |
4,995 | 2,765,514,010 | IssuesEvent | 2015-04-29 20:59:00 | sunlightlabs/the-phantom-mask | https://api.github.com/repos/sunlightlabs/the-phantom-mask | opened | As a user, I will be notified when my email does not successfully get sent to congress. | copy design | 
| 1.0 | As a user, I will be notified when my email does not successfully get sent to congress. - 
| non_process | as a user i will be notified when my email does not successfully get sent to congress | 0 |
5,939 | 8,761,487,724 | IssuesEvent | 2018-12-16 17:54:16 | jkpang/PPHub-Feedback | https://api.github.com/repos/jkpang/PPHub-Feedback | closed | URL scheme to open issues? | Feature Processing | I'm the developer of [Opener](https://www.opener.link), which supports PPHub. Does PPHub have a way of deep linking into the app to navigate to specific issues? Some of my users have requested it, and your competitor GitHawk supports it. | 1.0 | URL scheme to open issues? - I'm the developer of [Opener](https://www.opener.link), which supports PPHub. Does PPHub have a way of deep linking into the app to navigate to specific issues? Some of my users have requested it, and your competitor GitHawk supports it. | process | url scheme to open issues i m the developer of which supports pphub does pphub have a way of deep linking into the app to navigate to specific issues some of my users have requested it and your competitor githawk supports it | 1 |
16,793 | 22,038,932,935 | IssuesEvent | 2022-05-29 03:04:05 | q191201771/lal | https://api.github.com/repos/q191201771/lal | closed | RTMP push stream resolution wider than usual: HLS segmenting does not work correctly? | #Bug *In process *Waiting reply | Because I need to test pushing some concatenated videos, the video resolution is non-standard; after pushing the stream, HLS segmenting and the other outputs all work incorrectly.
Server : NGINX RTMP (github.com/arut/nginx-rtmp-module)
displayWidth : 4096
displayHeight : 2048
fps : 0
profile :
level :
encoder : Lavf58.45.100
Stream #0:0: Video: h264 ([7][0][0][0] / 0x0007), yuv420p(progressive), 4096x2048 [SAR 1:1 DAR 2
:1], q=2-31, 1999 kb/s, 7.50 fps, 15 tbr, 1k tbn, 1k tbc
After pushing a 4096 x 2048 stream, HLS segmenting, FLV, and the rest all stop working correctly: the segments are automatically detected as a 1920*1088 video and the picture has no content either. The FLV outputs are the same.
| 1.0 | RTMP push stream resolution wider than usual: HLS segmenting does not work correctly? - Because I need to test pushing some concatenated videos, the video resolution is non-standard; after pushing the stream, HLS segmenting and the other outputs all work incorrectly.
Server : NGINX RTMP (github.com/arut/nginx-rtmp-module)
displayWidth : 4096
displayHeight : 2048
fps : 0
profile :
level :
encoder : Lavf58.45.100
Stream #0:0: Video: h264 ([7][0][0][0] / 0x0007), yuv420p(progressive), 4096x2048 [SAR 1:1 DAR 2
:1], q=2-31, 1999 kb/s, 7.50 fps, 15 tbr, 1k tbn, 1k tbc
After pushing a 4096 x 2048 stream, HLS segmenting, FLV, and the rest all stop working correctly: the segments are automatically detected as a 1920*1088 video and the picture has no content either. The FLV outputs are the same.
| process | rtmp push stream resolution wider than usual hls segmenting does not work correctly because i need to test pushing some concatenated videos the video resolution is non standard after pushing the stream hls segmenting and the other outputs all work incorrectly server nginx rtmp github com arut nginx rtmp module displaywidth displayheight fps profile level encoder stream video progressive sar dar q kb s fps tbr tbn tbc x hls segmenting flv and the rest all stop working correctly the picture has no content either the flv outputs are the same | 1 |
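A 1920*1088 fallback in the record above is a hint that the dimensions are not being read from the H.264 SPS (1088 is the macroblock-rounded height of 1080p). For orientation, the display size in the common 4:2:0, frame_mbs_only case is derived from SPS fields as sketched below; this is generic H.264 arithmetic, not lal's actual code:

```python
def h264_coded_size(pic_width_in_mbs_minus1, pic_height_in_map_units_minus1,
                    crop_left=0, crop_right=0, crop_top=0, crop_bottom=0):
    """Display size from H.264 SPS fields (4:2:0, frame_mbs_only case):
    macroblock grid times 16, minus frame cropping in 2-sample units."""
    width = (pic_width_in_mbs_minus1 + 1) * 16 - 2 * (crop_left + crop_right)
    height = (pic_height_in_map_units_minus1 + 1) * 16 - 2 * (crop_top + crop_bottom)
    return width, height

# 256x128 macroblocks, no cropping -> the 4096x2048 stream from this report
size = h264_coded_size(255, 127)  # -> (4096, 2048)
```

A 1080p stream is coded as 1920x1088 with a bottom crop of 8 luma samples, which is why 1088 shows up when the crop and macroblock counts are not parsed.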
143,257 | 11,531,422,062 | IssuesEvent | 2020-02-17 00:36:39 | kubernetes/test-infra | https://api.github.com/repos/kubernetes/test-infra | closed | ability to subscribe to test grid board summary's failures | area/testgrid kind/feature lifecycle/rotten sig/testing | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
As a community member doing triage I regularly look at testgrid summary boards and visually scan for new failures. I would like an email regarding state changes.
**Why is this needed**:
1) notifications are good
2) It is possible to get notifications for board tabs and for individual tests and this is commonly done. But for example as a member of the patch release team, I'm watching all of sig-release-1.1.[345]-blocking|all|informing for failures. Adding our team email to all of those tabs is going to be something like 300 lines of config patch, when it could be 9. This is not only awkward and a large barrier to folks choosing to proactively monitor more, it is highly redundant and not highly maintainable. | 2.0 | ability to subscribe to test grid board summary's failures - <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
As a community member doing triage I regularly look at testgrid summary boards and visually scan for new failures. I would like an email regarding state changes.
**Why is this needed**:
1) notifications are good
2) It is possible to get notifications for board tabs and for individual tests and this is commonly done. But for example as a member of the patch release team, I'm watching all of sig-release-1.1.[345]-blocking|all|informing for failures. Adding our team email to all of those tabs is going to be something like 300 lines of config patch, when it could be 9. This is not only awkward and a large barrier to folks choosing to proactively monitor more, it is highly redundant and not highly maintainable. | non_process | ability to subscribe to test grid board summary s failures what would you like to be added as a community member doing triage i regularly look at testgrid summary boards and visually scan for new failures i would like an email regarding state changes why is this needed notifications are good it is possible to get notifications for board tabs and for individual tests and this is commonly done but for example as a member of the patch release team i m watching all of sig release blocking all informing for failures adding our team email to all of those tabs is going to be something like lines of config patch when it could be this is not only awkward and a large barrier to folks choosing to proactively monitor more it is highly redundant and not highly maintainable | 0 |
22,264 | 30,817,346,428 | IssuesEvent | 2023-08-01 14:14:29 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Regressions in System.Diagnostics.Perf_Process on 2/17/2023 | area-System.Diagnostics.Process tenet-performance tenet-performance-benchmarks |
### Run Information
Architecture | x64
-- | --
OS | Windows 10.0.19042
Baseline | [fc3303de655f69ede541c795ce054c5673d45b49](https://github.com/dotnet/runtime/commit/fc3303de655f69ede541c795ce054c5673d45b49)
Compare | [f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e](https://github.com/dotnet/runtime/commit/f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e)
Diff | [Diff](https://github.com/dotnet/runtime/compare/fc3303de655f69ede541c795ce054c5673d45b49...f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e)
### Regressions in Benchstone.BenchI.IniArray
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[Test - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_x64_Windows 10.0.19042/amd/Benchstone.BenchI.IniArray.Test.html>) | 51.09 ms | 58.21 ms | 1.14 | 0.28 | False | | |

[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_21_2023/refs/heads/main_x64_Windows%2010.0.19042/amd_Regression/Benchstone.BenchI.IniArray.html>)
### Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-8e5507e4-5f5a-4fc4-8e03-b4a638251a08e574e99e9b44195a5/f21050c7-649a-447a-89b6-87aaa8de18f8.zip?sv=2021-08-06&se=2023-03-19T12%3A03%3A01Z&sr=c&sp=rl&sig=u4VjBOhDfYi6G89fpanOiPYAR2HJttZtQxNt9oGjJYM%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-5b3e2158-f3f8-4a47-b1b2-6d79904e1a12c2f0d364aad4be3b5/4b10263c-5b1f-44da-89b2-e5381cd64868.zip?sv=2021-08-06&se=2023-03-19T23%3A35%3A35Z&sr=c&sp=rl&sig=2aCF%2FNqcImRqeJIxXo15QOsZ1iP%2BOsk1KvmOeS72C9s%3D>)
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'Benchstone.BenchI.IniArray*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-8e5507e4-5f5a-4fc4-8e03-b4a638251a08e574e99e9b44195a5/f21050c7-649a-447a-89b6-87aaa8de18f8.zip?sv=2021-08-06&se=2023-03-19T12%3A03%3A01Z&sr=c&sp=rl&sig=u4VjBOhDfYi6G89fpanOiPYAR2HJttZtQxNt9oGjJYM%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-5b3e2158-f3f8-4a47-b1b2-6d79904e1a12c2f0d364aad4be3b5/4b10263c-5b1f-44da-89b2-e5381cd64868.zip?sv=2021-08-06&se=2023-03-19T23%3A35%3A35Z&sr=c&sp=rl&sig=2aCF%2FNqcImRqeJIxXo15QOsZ1iP%2BOsk1KvmOeS72C9s%3D>)
### Histogram
#### Benchstone.BenchI.IniArray.Test
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 58.21131625 > 54.607623874999994.
IsChangePoint: Marked as a change because one of 12/23/2022 5:45:27 PM, 1/2/2023 2:49:36 PM, 2/9/2023 10:24:28 PM, 2/17/2023 12:28:21 PM, 2/21/2023 5:46:41 AM falls between 2/12/2023 2:35:10 PM and 2/21/2023 5:46:41 AM.
IsRegressionStdDev: Marked as regression because -7.009427384470399 (T) = (0 -58366382.00066138) / Math.Sqrt((4106402573871.0205 / (44)) + (8588057898295.025 / (18))) is less than -2.0002978220134566 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (44) + (18) - 2, .025) and -0.09975162952496873 = (53072330.54600919 - 58366382.00066138) / 53072330.54600919 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
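The `IsRegressionStdDev` lines in the detection log compute a Welch-style statistic and pair it with a 5% relative-change threshold. A minimal sketch of that rule exactly as the log prints it (numerator `0 - compare_mean`); the fixed `t_crit` here stands in for `MathNet.Numerics.Distributions.StudentT.InvCDF` and is an assumption:

```python
import math

def is_regression_stddev(base_mean, base_var, n_base,
                         cmp_mean, cmp_var, n_cmp, t_crit=-2.0):
    # T = (0 - compare_mean) / sqrt(base_var/n_base + cmp_var/n_cmp), as in the log
    t = (0 - cmp_mean) / math.sqrt(base_var / n_base + cmp_var / n_cmp)
    # the compare must also be at least 5% slower than the baseline, relatively
    rel = (base_mean - cmp_mean) / base_mean
    return t < t_crit and rel < -0.05
```

With the sample sizes in the log (44 baseline points, 18 compare points), both conditions must hold for the point to be marked as a regression.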
### Run Information
Architecture | x64
-- | --
OS | Windows 10.0.19042
Baseline | [fc3303de655f69ede541c795ce054c5673d45b49](https://github.com/dotnet/runtime/commit/fc3303de655f69ede541c795ce054c5673d45b49)
Compare | [f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e](https://github.com/dotnet/runtime/commit/f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e)
Diff | [Diff](https://github.com/dotnet/runtime/compare/fc3303de655f69ede541c795ce054c5673d45b49...f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e)
### Regressions in System.Diagnostics.Perf_Process
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[StartAndWaitForExit - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_x64_Windows 10.0.19042/amd/System.Diagnostics.Perf_Process.StartAndWaitForExit.html>) | 7.05 ms | 8.37 ms | 1.19 | 0.14 | False | | |
[Start - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_x64_Windows 10.0.19042/amd/System.Diagnostics.Perf_Process.Start.html>) | 1.29 ms | 2.31 ms | 1.80 | 0.35 | False | | |


[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_21_2023/refs/heads/main_x64_Windows%2010.0.19042/amd_Regression/System.Diagnostics.Perf_Process.html>)
### Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-8e5507e4-5f5a-4fc4-8e03-b4a638251a08e574e99e9b44195a5/f21050c7-649a-447a-89b6-87aaa8de18f8.zip?sv=2021-08-06&se=2023-03-19T12%3A03%3A01Z&sr=c&sp=rl&sig=u4VjBOhDfYi6G89fpanOiPYAR2HJttZtQxNt9oGjJYM%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-5b3e2158-f3f8-4a47-b1b2-6d79904e1a12c2f0d364aad4be3b5/4b10263c-5b1f-44da-89b2-e5381cd64868.zip?sv=2021-08-06&se=2023-03-19T23%3A35%3A35Z&sr=c&sp=rl&sig=2aCF%2FNqcImRqeJIxXo15QOsZ1iP%2BOsk1KvmOeS72C9s%3D>)
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'System.Diagnostics.Perf_Process*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-8e5507e4-5f5a-4fc4-8e03-b4a638251a08e574e99e9b44195a5/f21050c7-649a-447a-89b6-87aaa8de18f8.zip?sv=2021-08-06&se=2023-03-19T12%3A03%3A01Z&sr=c&sp=rl&sig=u4VjBOhDfYi6G89fpanOiPYAR2HJttZtQxNt9oGjJYM%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-5b3e2158-f3f8-4a47-b1b2-6d79904e1a12c2f0d364aad4be3b5/4b10263c-5b1f-44da-89b2-e5381cd64868.zip?sv=2021-08-06&se=2023-03-19T23%3A35%3A35Z&sr=c&sp=rl&sig=2aCF%2FNqcImRqeJIxXo15QOsZ1iP%2BOsk1KvmOeS72C9s%3D>)
### Histogram
#### System.Diagnostics.Perf_Process.StartAndWaitForExit
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 8.36881527093596 > 7.384021116515837.
IsChangePoint: Marked as a change because one of 2/17/2023 12:28:21 PM, 2/21/2023 5:46:41 AM falls between 2/12/2023 2:35:10 PM and 2/21/2023 5:46:41 AM.
IsRegressionStdDev: Marked as regression because -15.88219952911553 (T) = (0 -8068198.678623415) / Math.Sqrt((69537992047.81537 / (44)) + (31150128383.654545 / (18))) is less than -2.0002978220134566 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (44) + (18) - 2, .025) and -0.12773796477077226 = (7154320.3569132155 - 8068198.678623415) / 7154320.3569132155 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
#### System.Diagnostics.Perf_Process.Start
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 2.310655 > 1.379377125.
IsChangePoint: Marked as a change because one of 2/15/2023 2:23:18 PM, 2/17/2023 12:28:21 PM, 2/21/2023 5:46:41 AM falls between 2/12/2023 2:35:10 PM and 2/21/2023 5:46:41 AM.
IsRegressionStdDev: Marked as regression because -17.138199428531344 (T) = (0 -2276359.254385965) / Math.Sqrt((96621908561.64552 / (44)) + (2638738374.362474 / (18))) is less than -2.0002978220134566 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (44) + (18) - 2, .025) and -0.573296543558203 = (1446872.3418394472 - 2276359.254385965) / 1446872.3418394472 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
| 1.0 | Regressions in System.Diagnostics.Perf_Process on 2/17/2023 -
### Run Information
Architecture | x64
-- | --
OS | Windows 10.0.19042
Baseline | [fc3303de655f69ede541c795ce054c5673d45b49](https://github.com/dotnet/runtime/commit/fc3303de655f69ede541c795ce054c5673d45b49)
Compare | [f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e](https://github.com/dotnet/runtime/commit/f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e)
Diff | [Diff](https://github.com/dotnet/runtime/compare/fc3303de655f69ede541c795ce054c5673d45b49...f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e)
### Regressions in Benchstone.BenchI.IniArray
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[Test - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_x64_Windows 10.0.19042/amd/Benchstone.BenchI.IniArray.Test.html>) | 51.09 ms | 58.21 ms | 1.14 | 0.28 | False | | |

[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_21_2023/refs/heads/main_x64_Windows%2010.0.19042/amd_Regression/Benchstone.BenchI.IniArray.html>)
### Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-8e5507e4-5f5a-4fc4-8e03-b4a638251a08e574e99e9b44195a5/f21050c7-649a-447a-89b6-87aaa8de18f8.zip?sv=2021-08-06&se=2023-03-19T12%3A03%3A01Z&sr=c&sp=rl&sig=u4VjBOhDfYi6G89fpanOiPYAR2HJttZtQxNt9oGjJYM%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-5b3e2158-f3f8-4a47-b1b2-6d79904e1a12c2f0d364aad4be3b5/4b10263c-5b1f-44da-89b2-e5381cd64868.zip?sv=2021-08-06&se=2023-03-19T23%3A35%3A35Z&sr=c&sp=rl&sig=2aCF%2FNqcImRqeJIxXo15QOsZ1iP%2BOsk1KvmOeS72C9s%3D>)
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'Benchstone.BenchI.IniArray*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-8e5507e4-5f5a-4fc4-8e03-b4a638251a08e574e99e9b44195a5/f21050c7-649a-447a-89b6-87aaa8de18f8.zip?sv=2021-08-06&se=2023-03-19T12%3A03%3A01Z&sr=c&sp=rl&sig=u4VjBOhDfYi6G89fpanOiPYAR2HJttZtQxNt9oGjJYM%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-5b3e2158-f3f8-4a47-b1b2-6d79904e1a12c2f0d364aad4be3b5/4b10263c-5b1f-44da-89b2-e5381cd64868.zip?sv=2021-08-06&se=2023-03-19T23%3A35%3A35Z&sr=c&sp=rl&sig=2aCF%2FNqcImRqeJIxXo15QOsZ1iP%2BOsk1KvmOeS72C9s%3D>)
### Histogram
#### Benchstone.BenchI.IniArray.Test
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 58.21131625 > 54.607623874999994.
IsChangePoint: Marked as a change because one of 12/23/2022 5:45:27 PM, 1/2/2023 2:49:36 PM, 2/9/2023 10:24:28 PM, 2/17/2023 12:28:21 PM, 2/21/2023 5:46:41 AM falls between 2/12/2023 2:35:10 PM and 2/21/2023 5:46:41 AM.
IsRegressionStdDev: Marked as regression because -7.009427384470399 (T) = (0 -58366382.00066138) / Math.Sqrt((4106402573871.0205 / (44)) + (8588057898295.025 / (18))) is less than -2.0002978220134566 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (44) + (18) - 2, .025) and -0.09975162952496873 = (53072330.54600919 - 58366382.00066138) / 53072330.54600919 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
### Run Information
Architecture | x64
-- | --
OS | Windows 10.0.19042
Baseline | [fc3303de655f69ede541c795ce054c5673d45b49](https://github.com/dotnet/runtime/commit/fc3303de655f69ede541c795ce054c5673d45b49)
Compare | [f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e](https://github.com/dotnet/runtime/commit/f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e)
Diff | [Diff](https://github.com/dotnet/runtime/compare/fc3303de655f69ede541c795ce054c5673d45b49...f8e6ebfd87da7d8947a8cf550cff9ec51f32cd1e)
### Regressions in System.Diagnostics.Perf_Process
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[StartAndWaitForExit - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_x64_Windows 10.0.19042/amd/System.Diagnostics.Perf_Process.StartAndWaitForExit.html>) | 7.05 ms | 8.37 ms | 1.19 | 0.14 | False | | |
[Start - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_x64_Windows 10.0.19042/amd/System.Diagnostics.Perf_Process.Start.html>) | 1.29 ms | 2.31 ms | 1.80 | 0.35 | False | | |


[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/02_21_2023/refs/heads/main_x64_Windows%2010.0.19042/amd_Regression/System.Diagnostics.Perf_Process.html>)
### Repro
General Docs link: https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow-dotnet-runtime.md
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-8e5507e4-5f5a-4fc4-8e03-b4a638251a08e574e99e9b44195a5/f21050c7-649a-447a-89b6-87aaa8de18f8.zip?sv=2021-08-06&se=2023-03-19T12%3A03%3A01Z&sr=c&sp=rl&sig=u4VjBOhDfYi6G89fpanOiPYAR2HJttZtQxNt9oGjJYM%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-5b3e2158-f3f8-4a47-b1b2-6d79904e1a12c2f0d364aad4be3b5/4b10263c-5b1f-44da-89b2-e5381cd64868.zip?sv=2021-08-06&se=2023-03-19T23%3A35%3A35Z&sr=c&sp=rl&sig=2aCF%2FNqcImRqeJIxXo15QOsZ1iP%2BOsk1KvmOeS72C9s%3D>)
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net8.0 --filter 'System.Diagnostics.Perf_Process*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-8e5507e4-5f5a-4fc4-8e03-b4a638251a08e574e99e9b44195a5/f21050c7-649a-447a-89b6-87aaa8de18f8.zip?sv=2021-08-06&se=2023-03-19T12%3A03%3A01Z&sr=c&sp=rl&sig=u4VjBOhDfYi6G89fpanOiPYAR2HJttZtQxNt9oGjJYM%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-5b3e2158-f3f8-4a47-b1b2-6d79904e1a12c2f0d364aad4be3b5/4b10263c-5b1f-44da-89b2-e5381cd64868.zip?sv=2021-08-06&se=2023-03-19T23%3A35%3A35Z&sr=c&sp=rl&sig=2aCF%2FNqcImRqeJIxXo15QOsZ1iP%2BOsk1KvmOeS72C9s%3D>)
### Histogram
#### System.Diagnostics.Perf_Process.StartAndWaitForExit
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 8.36881527093596 > 7.384021116515837.
IsChangePoint: Marked as a change because one of 2/17/2023 12:28:21 PM, 2/21/2023 5:46:41 AM falls between 2/12/2023 2:35:10 PM and 2/21/2023 5:46:41 AM.
IsRegressionStdDev: Marked as regression because -15.88219952911553 (T) = (0 -8068198.678623415) / Math.Sqrt((69537992047.81537 / (44)) + (31150128383.654545 / (18))) is less than -2.0002978220134566 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (44) + (18) - 2, .025) and -0.12773796477077226 = (7154320.3569132155 - 8068198.678623415) / 7154320.3569132155 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
#### System.Diagnostics.Perf_Process.Start
```log
```
### Description of detection logic
```
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 2.310655 > 1.379377125.
IsChangePoint: Marked as a change because one of 2/15/2023 2:23:18 PM, 2/17/2023 12:28:21 PM, 2/21/2023 5:46:41 AM falls between 2/12/2023 2:35:10 PM and 2/21/2023 5:46:41 AM.
IsRegressionStdDev: Marked as regression because -17.138199428531344 (T) = (0 -2276359.254385965) / Math.Sqrt((96621908561.64552 / (44)) + (2638738374.362474 / (18))) is less than -2.0002978220134566 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (44) + (18) - 2, .025) and -0.573296543558203 = (1446872.3418394472 - 2276359.254385965) / 1446872.3418394472 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
| process | regressions in system diagnostics perf process on run information architecture os windows baseline compare diff regressions in benchstone benchi iniarray benchmark baseline test test base test quality edge detector baseline ir compare ir ir ratio baseline etl compare etl ms ms false repro general docs link payloads cmd git clone py performance scripts benchmarks ci py f filter benchstone benchi iniarray payloads histogram benchstone benchi iniarray test log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of pm pm pm pm am falls between pm and am isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so docs run information architecture os windows baseline compare diff regressions in system diagnostics perf process benchmark baseline test test base test quality edge detector baseline ir compare ir ir ratio baseline etl compare etl ms ms false ms ms false repro general docs link payloads cmd git clone py performance scripts benchmarks ci py f filter system diagnostics perf process payloads histogram system diagnostics perf process 
startandwaitforexit log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of pm am falls between pm and am isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so system diagnostics perf process start log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of pm pm am falls between pm and am isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions 
studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so docs | 1 |
380,067 | 11,253,638,652 | IssuesEvent | 2020-01-11 17:38:45 | SkriptLang/Skript | https://api.github.com/repos/SkriptLang/Skript | closed | Multiplication in List Index | bug completed priority: low | <!--- This is just a recommended template, but you should follow it --->
### Description
<!--- A clear and concise description of what the problem is --->
Due to the rule of ```a variable's name must not contain any asterisks except at the end after '::' to denote a list variable```, you are unable to directly use multiplication in a list's index
### Steps to Reproduce
<!--- Steps to reproduce the problem. If applicable, add a script or code snippet here --->
```
function test():
set {_test::%2*3%} to "Example"
```
### Expected Behavior
<!--- A clear and concise description of what you expected to happen --->
For the above Skript to successfully parse
### Errors / Screenshots
<!--- If applicable, add errors and screenshots to help explain your problem --->

<!---
If you get a console error, you should send the full error. Also if the error contains a stack trace; you can remove the Server Information section below.
Also you should send long errors using a permanent and reliable paste service like Gist, not hastebin.
--->
### Server Information
* **Server version/platform:** Spigot 1.14.4
* **Skript version:** Skript 2.4-beta5
### Additional Context
<!--- Add any other context about the problem here --->
This issue is quite annoying whenever you need to do multiplication with a list's indices, and although it's possible to mitigate using intermediate temporary variables, it doesn't seem like too difficult of a bug to fix (just make the asterisk check skip content within %%)? | 1.0 | Multiplication in List Index - <!--- This is just a recommended template, but you should follow it --->
### Description
<!--- A clear and concise description of what the problem is --->
Due to the rule of ```a variable's name must not contain any asterisks except at the end after '::' to denote a list variable```, you are unable to directly use multiplication in a list's index
### Steps to Reproduce
<!--- Steps to reproduce the problem. If applicable, add a script or code snippet here --->
```
function test():
set {_test::%2*3%} to "Example"
```
### Expected Behavior
<!--- A clear and concise description of what you expected to happen --->
For the above Skript to successfully parse
### Errors / Screenshots
<!--- If applicable, add errors and screenshots to help explain your problem --->

<!---
If you get a console error, you should send the full error. Also if the error contains a stack trace; you can remove the Server Information section below.
Also you should send long errors using a permanent and reliable paste service like Gist, not hastebin.
--->
### Server Information
* **Server version/platform:** Spigot 1.14.4
* **Skript version:** Skript 2.4-beta5
### Additional Context
<!--- Add any other context about the problem here --->
This issue is quite annoying whenever you need to do multiplication with a list's indices, and although it's possible to mitigate using intermediate temporary variables, it doesn't seem like too difficult of a bug to fix (just make the asterisk check skip content within %%)? | non_process | multiplication in list index description due to the rule of a variable s name must not contain any asterisks except at the end after to denote a list variable you are unable to directly use multiplication in a list s index steps to reproduce function test set test to example expected behavior for the above skript to successfully parse errors screenshots if you get a console error you should send the full error also if the error contains a stack trace you can remove the server information section below also you should send long errors using a permanent and reliable paste service like gist not hastebin server information server version platform spigot skript version skript additional context this issue is quite annoying whenever you need to do multiplication with a list s indices and although it s possible to mitigate using intermediate temporary variables it doesn t seem like too difficult of a bug to fix just make the asterisk check skip content within | 0 |
10,855 | 8,759,522,709 | IssuesEvent | 2018-12-15 17:07:02 | UCSC-MedBook/MedBook | https://api.github.com/repos/UCSC-MedBook/MedBook | closed | Improve backup procedure -- rsync first, tar after | infrastructure ready | Consider & implement per Rob's comment on #8 :
> To keep the load down on production and reduce network I/O I propose we rsync everything to backup before tar - that way only incremental blob store is backed up. Then on backup.medbook.io we could periodically tar up complete backup. Most of the time the rsync will only be coping changed files and the mongo dump (and 'might' send a diff of it, not sure)
| 1.0 | Improve backup procedure -- rsync first, tar after - Consider & implement per Rob's comment on #8 :
> To keep the load down on production and reduce network I/O I propose we rsync everything to backup before tar - that way only incremental blob store is backed up. Then on backup.medbook.io we could periodically tar up complete backup. Most of the time the rsync will only be coping changed files and the mongo dump (and 'might' send a diff of it, not sure)
| non_process | improve backup procedure rsync first tar after consider implement per rob s comment on to keep the load down on production and reduce network i o i propose we rsync everything to backup before tar that way only incremental blob store is backed up then on backup medbook io we could periodically tar up complete backup most of the time the rsync will only be coping changed files and the mongo dump and might send a diff of it not sure | 0 |
174,473 | 27,644,778,794 | IssuesEvent | 2023-03-10 21:40:13 | saving-satoshi/saving-satoshi | https://api.github.com/repos/saving-satoshi/saving-satoshi | closed | Add a second page to the chapter 1 intro | copy design | This page goes after [this one](https://savingsatoshi.com/en/chapters/chapter-1) and then leads into the first challenge. The copy in Figma ([here](https://www.figma.com/file/LqjK3Tpvd9KJ4buFArCJBQ/Saving-Satoshi?node-id=2374%3A25234&t=1WVTKCVc1kPVBDfc-1)) should be up-to-date (just checked it). An export-ready image can be found [here](https://www.figma.com/file/LqjK3Tpvd9KJ4buFArCJBQ/Saving-Satoshi?node-id=2323%3A25490&t=1WVTKCVc1kPVBDfc-1) (select the image layer and press export in the bottom of the right sidebar).

| 1.0 | Add a second page to the chapter 1 intro - This page goes after [this one](https://savingsatoshi.com/en/chapters/chapter-1) and then leads into the first challenge. The copy in Figma ([here](https://www.figma.com/file/LqjK3Tpvd9KJ4buFArCJBQ/Saving-Satoshi?node-id=2374%3A25234&t=1WVTKCVc1kPVBDfc-1)) should be up-to-date (just checked it). An export-ready image can be found [here](https://www.figma.com/file/LqjK3Tpvd9KJ4buFArCJBQ/Saving-Satoshi?node-id=2323%3A25490&t=1WVTKCVc1kPVBDfc-1) (select the image layer and press export in the bottom of the right sidebar).

| non_process | add a second page to the chapter intro this page goes after and then leads into the first challenge the copy in figma should be up to date just checked it an export ready image can be found select the image layer and press export in the bottom of the right sidebar | 0 |
12,713 | 15,085,146,861 | IssuesEvent | 2021-02-05 18:12:46 | w3c/rdf-star | https://api.github.com/repos/w3c/rdf-star | closed | Rename "RDF*" - avoiding regular expression wildcard in name | process | Copied from https://github.com/w3c/EasierRDF/issues/76
> The name "RDF*" has the negative property that it is not just a string, but also a regular expression. Some search engines make it difficult or impossible to search for "RDF*" without interpreting the name as a regular expression.
>
> Some suggestions:
>
> * RDF star
> * RDFx
@hartig commented in https://github.com/w3c/EasierRDF/issues/76#issuecomment-708232084
> I don't think it is a good idea to rename RDF* at this point. However, I understand the issue related to search engines. A possible way to address this issue is to include keywords such as "RDF star", "RDFstar", "SPARQL star", etc, in the metadata sections of the documents about the approach; this way, search engine can pick up these keywords and, then, searches for these keywords will end up showing the right hits.
I think it is not to late to rename the specification by replacing "RDF*" with "RDF star". Also:
* Quite a few articles about `RDF*` already write `RDF star` instead
* This repository is named `rdf-star`
* html metadata keywords are almost never provided on the web today and it is questionable that search engines use them
* html metadata keywords can not be provided for PDF files etc.
| 1.0 | Rename "RDF*" - avoiding regular expression wildcard in name - Copied from https://github.com/w3c/EasierRDF/issues/76
> The name "RDF*" has the negative property that it is not just a string, but also a regular expression. Some search engines make it difficult or impossible to search for "RDF*" without interpreting the name as a regular expression.
>
> Some suggestions:
>
> * RDF star
> * RDFx
@hartig commented in https://github.com/w3c/EasierRDF/issues/76#issuecomment-708232084
> I don't think it is a good idea to rename RDF* at this point. However, I understand the issue related to search engines. A possible way to address this issue is to include keywords such as "RDF star", "RDFstar", "SPARQL star", etc, in the metadata sections of the documents about the approach; this way, search engine can pick up these keywords and, then, searches for these keywords will end up showing the right hits.
I think it is not to late to rename the specification by replacing "RDF*" with "RDF star". Also:
* Quite a few articles about `RDF*` already write `RDF star` instead
* This repository is named `rdf-star`
* html metadata keywords are almost never provided on the web today and it is questionable that search engines use them
* html metadata keywords can not be provided for PDF files etc.
| process | rename rdf avoiding regular expression wildcard in name copied from the name rdf has the negative property that it is not just a string but also a regular expression some search engines make it difficult or impossible to search for rdf without interpreting the name as a regular expression some suggestions rdf star rdfx hartig commented in i don t think it is a good idea to rename rdf at this point however i understand the issue related to search engines a possible way to address this issue is to include keywords such as rdf star rdfstar sparql star etc in the metadata sections of the documents about the approach this way search engine can pick up these keywords and then searches for these keywords will end up showing the right hits i think it is not to late to rename the specification by replacing rdf with rdf star also quite a few articles about rdf already write rdf star instead this repository is named rdf star html metadata keywords are almost never provided on the web today and it is questionable that search engines use them html metadata keywords can not be provided for pdf files etc | 1 |
10,600 | 13,427,227,671 | IssuesEvent | 2020-09-06 17:13:11 | bisq-network/bisq | https://api.github.com/repos/bisq-network/bisq | closed | Trading with Revolut between v1.3.7 and v1.3.8 clients will fail | a:bug in:trade-process re:testing | ### Description
As we are requiring now a username to trade Revolut based on feedback from Mediators it won't be possible to trade Revolut between v1.3.7 and v1.3.8 clients. This is caused by a breach of contract.
#### Version
v1.3.8
### Steps to reproduce
Use old client (v1.3.7) with Revolut account using phone number and new client (v1.3.8) using a username.
### Screenshots

This might cause confusion during the transition phase.
We could either force everyone to update to v1.3.8 or mention it in the update message so people are aware of this problem.
The problem with just mentioning is, that it will affect clients updating to v1.3.8 and clients who stay on v1.3.8 causing this error when trying to take an incompatible offer.
If we don't want to do a force update maybe we should filter out at least v1.3.7 Revolut offers for v1.3.8 clients. | 1.0 | Trading with Revolut between v1.3.7 and v1.3.8 clients will fail - ### Description
As we are requiring now a username to trade Revolut based on feedback from Mediators it won't be possible to trade Revolut between v1.3.7 and v1.3.8 clients. This is caused by a breach of contract.
#### Version
v1.3.8
### Steps to reproduce
Use old client (v1.3.7) with Revolut account using phone number and new client (v1.3.8) using a username.
### Screenshots

This might cause confusion during the transition phase.
We could either force everyone to update to v1.3.8 or mention it in the update message so people are aware of this problem.
The problem with just mentioning is, that it will affect clients updating to v1.3.8 and clients who stay on v1.3.8 causing this error when trying to take an incompatible offer.
If we don't want to do a force update maybe we should filter out at least v1.3.7 Revolut offers for v1.3.8 clients. | process | trading with revolut between and clients will fail description as we are requiring now a username to trade revolut based on feedback from mediators it won t be possible to trade revolut between and clients this is caused by a breach of contract version steps to reproduce use old client with revolut account using phone number and new client using a username screenshots this might cause confusion during the transition phase we could either force everyone to update to or mention it in the update message so people are aware of this problem the problem with just mentioning is that it will affect clients updating to and clients who stay on causing this error when trying to take an incompatible offer if we don t want to do a force update maybe we should filter out at least revolut offers for clients | 1 |
4,452 | 7,319,442,556 | IssuesEvent | 2018-03-02 00:51:09 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Privacy Statement URL removal | active-directory cxp in-process triaged | Is it possible to remove 'Privacy Statement URL' for an application on ' https://apps.dev.microsoft.com/' so that it is not shown to user when her consent is requested?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 18882e74-1275-a04f-cd7a-9fa170e6b853
* Version Independent ID: decf2e12-59ba-2707-53cb-adb72a781987
* [Content](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-app-registration)
* [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/develop/active-directory-v2-app-registration.md)
* Service: active-directory | 1.0 | Privacy Statement URL removal - Is it possible to remove 'Privacy Statement URL' for an application on ' https://apps.dev.microsoft.com/' so that it is not shown to user when her consent is requested?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 18882e74-1275-a04f-cd7a-9fa170e6b853
* Version Independent ID: decf2e12-59ba-2707-53cb-adb72a781987
* [Content](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-app-registration)
* [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/active-directory/develop/active-directory-v2-app-registration.md)
* Service: active-directory | process | privacy statement url removal is it possible to remove privacy statement url for an application on so that it is not shown to user when her consent is requested document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id service active directory | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.