Column summary (dtype, then min/max value, min/max string length, or number of distinct classes):

| Column | Dtype | Stats |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 7 to 112 |
| repo_url | stringlengths | 36 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 to 744 |
| labels | stringlengths | 4 to 574 |
| body | stringlengths | 9 to 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 to 211k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 188k |
| binary_label | int64 | 0 to 1 |
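For orientation, here is a minimal sketch of loading a dump like this and cross-checking the two label encodings. The file name `github_issues_process.csv` is a placeholder (the dataset's real storage location is not given here), and the assertions simply restate what the schema and the sample rows below show: `label` is the string class and `binary_label` its 0/1 integer form.

```python
import pandas as pd

# Placeholder path: point this at wherever the dataset export actually lives.
df = pd.read_csv("github_issues_process.csv")

# In the rows shown below, label and binary_label encode the same target:
# "process" pairs with 1 and "non_process" with 0.
assert set(df["label"].unique()) <= {"process", "non_process"}
assert set(df["binary_label"].unique()) <= {0, 1}
assert (df["binary_label"] == (df["label"] == "process").astype(int)).all()

# text_combine concatenates title and body; text is a normalized copy of it.
print(df[["repo", "action", "label", "binary_label"]].head())
```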
**Sample row (Unnamed: 0 = 47,348)**
- id: 13,056,133,993
- type: IssuesEvent
- created_at: 2020-07-30 03:45:41
- repo: icecube-trac/tix2
- repo_url: https://api.github.com/repos/icecube-trac/tix2
- action: closed
- title: GLShovel does not correctly show pulses/launches/i3particle if the viewing timewindow is not in range (Trac #390)
- labels: Migrated from Trac defect glshovel
- body:
GLShovel has problems with long hitseries on a any DOM, when the viewing time excludes the first hit. If for example a hitseries has two hits, 1 at 10 microseconds and one at 100 microseconds, and we put the t_min to 50 and t_max to 200, no hit will be rendered for that DOM. The same is true for domlaunches and I3Particles. For example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event, you have to back in time often quite far to be able to see the track. Migrated from https://code.icecube.wisc.edu/ticket/390 ```json { "status": "closed", "changetime": "2013-11-21T22:34:18", "description": "GLShovel has problems with long hitseries on a any DOM, when the viewing time excludes the first hit.\n\nIf for example a hitseries has two hits, 1 at 10 microseconds and one\nat 100 microseconds, and we put the t_min to 50 and t_max to 200,\nno hit will be rendered for that DOM.\n\n\nThe same is true for domlaunches and I3Particles. For example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event, you have to back in time often quite far to be able to see the track.\n\n", "reporter": "gluesenkamp", "cc": "", "resolution": "fixed", "_ts": "1385073258000000", "component": "glshovel", "summary": "GLShovel does not correctly show pulses/launches/i3particle if the viewing timewindow is not in range", "priority": "normal", "keywords": "", "time": "2012-04-26T15:11:55", "milestone": "", "owner": "olivas", "type": "defect" } ```
- index: 1.0
- text_combine:
GLShovel does not correctly show pulses/launches/i3particle if the viewing timewindow is not in range (Trac #390) - GLShovel has problems with long hitseries on a any DOM, when the viewing time excludes the first hit. If for example a hitseries has two hits, 1 at 10 microseconds and one at 100 microseconds, and we put the t_min to 50 and t_max to 200, no hit will be rendered for that DOM. The same is true for domlaunches and I3Particles. For example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event, you have to back in time often quite far to be able to see the track. Migrated from https://code.icecube.wisc.edu/ticket/390 ```json { "status": "closed", "changetime": "2013-11-21T22:34:18", "description": "GLShovel has problems with long hitseries on a any DOM, when the viewing time excludes the first hit.\n\nIf for example a hitseries has two hits, 1 at 10 microseconds and one\nat 100 microseconds, and we put the t_min to 50 and t_max to 200,\nno hit will be rendered for that DOM.\n\n\nThe same is true for domlaunches and I3Particles. For example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event, you have to back in time often quite far to be able to see the track.\n\n", "reporter": "gluesenkamp", "cc": "", "resolution": "fixed", "_ts": "1385073258000000", "component": "glshovel", "summary": "GLShovel does not correctly show pulses/launches/i3particle if the viewing timewindow is not in range", "priority": "normal", "keywords": "", "time": "2012-04-26T15:11:55", "milestone": "", "owner": "olivas", "type": "defect" } ```
- label: non_process
- text:
glshovel does not correctly show pulses launches if the viewing timewindow is not in range trac glshovel has problems with long hitseries on a any dom when the viewing time excludes the first hit if for example a hitseries has two hits at microseconds and one at microseconds and we put the t min to and t max to no hit will be rendered for that dom the same is true for domlaunches and for example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event you have to back in time often quite far to be able to see the track migrated from json status closed changetime description glshovel has problems with long hitseries on a any dom when the viewing time excludes the first hit n nif for example a hitseries has two hits at microseconds and one nat microseconds and we put the t min to and t max to nno hit will be rendered for that dom n n nthe same is true for domlaunches and for example for neutrino datasets that come through earth where the neutrino is much earlier than the actual triggered event you have to back in time often quite far to be able to see the track n n reporter gluesenkamp cc resolution fixed ts component glshovel summary glshovel does not correctly show pulses launches if the viewing timewindow is not in range priority normal keywords time milestone owner olivas type defect
- binary_label: 0
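Comparing this row's `text_combine` and `text` fields suggests how `text` was derived: lowercased, URLs dropped, punctuation replaced by spaces, and any token containing a digit removed (here `i3particle`, `#390`, and the numbers in `t_min to 50` all disappear). The function below is a reverse-engineered approximation under those assumptions, not the dataset authors' actual pipeline; a few later rows keep non-ASCII symbols that this version would strip.

```python
import re

def approx_text(text_combine: str) -> str:
    """Rough reconstruction of the text_combine -> text normalization.

    Assumed steps, inferred from the sample row above: lowercase, remove
    URLs, turn punctuation into spaces, drop digit-bearing tokens, and
    collapse whitespace. This is a guess, not the original pipeline.
    """
    s = text_combine.lower()
    s = re.sub(r"https?://\S+", " ", s)   # drop URLs
    s = re.sub(r"[^a-z0-9\s]", " ", s)    # punctuation -> spaces
    tokens = [t for t in s.split() if not any(c.isdigit() for c in t)]
    return " ".join(tokens)
```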
**Sample row (Unnamed: 0 = 9,672)**
- id: 12,677,536,957
- type: IssuesEvent
- created_at: 2020-06-19 07:58:09
- repo: DevExpress/testcafe-hammerhead
- repo_url: https://api.github.com/repos/DevExpress/testcafe-hammerhead
- action: closed
- title: The wrong processing of destructuring
- labels: AREA: client AREA: server FREQUENCY: level 2 SYSTEM: script processing TYPE: bug
- body:
If `location` declared inside function or passed as function parameter we should not process it. It needs for fix https://github.com/DevExpress/testcafe-hammerhead/issues/640#issuecomment-228679853 **Update!** The origin issue was transfered to https://github.com/DevExpress/testcafe-hammerhead/issues/2283
- index: 1.0
- text_combine:
The wrong processing of destructuring - If `location` declared inside function or passed as function parameter we should not process it. It needs for fix https://github.com/DevExpress/testcafe-hammerhead/issues/640#issuecomment-228679853 **Update!** The origin issue was transfered to https://github.com/DevExpress/testcafe-hammerhead/issues/2283
- label: process
- text:
the wrong processing of destructuring if location declared inside function or passed as function parameter we should not process it it needs for fix update the origin issue was transfered to
- binary_label: 1
**Sample row (Unnamed: 0 = 188,834)**
- id: 14,476,789,265
- type: IssuesEvent
- created_at: 2020-12-10 05:03:43
- repo: brave/brave-browser
- repo_url: https://api.github.com/repos/brave/brave-browser
- action: closed
- title: Manual test run on Linux for 1.18.x
- labels: OS/Desktop OS/Linux QA/Yes release-notes/exclude tests
- body:
### Installer - [x] Check signature: If OS Run `spctl --assess --verbose /Applications/Brave-Browser-Beta.app/` and make sure it returns `accepted`. If Windows right click on the `brave_installer-x64.exe` and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window ### Widevine - [x] Verify `Widevine Notification` is shown when you visit Netflix for the first time - [x] Test that you can stream on Netflix on a fresh profile after installing Widevine ### Rewards - [x] Verify account balance shows correct BAT and USD value - [x] Verify you are able to restore a wallet - [x] Verify actions taken (claiming grant, tipping, auto-contribute) display in wallet panel - [x] Verify when you click on the BR panel while on a site, the panel displays site specific information (site favicon, domain, attention %) - [x] Verify you are able to make one-time tip and they display in tips panel - [x] Verify you are able to make recurring tip and they display in tips panel - [x] Verify you can tip a verified publisher - [x] Verify you can tip a verified YouTube creator - [x] Verify you are able to perform a contribution - [x] Verify if you disable auto-contribute you are still able to tip regular sites and YouTube creators ## Update tests - [x] Verify visiting `brave://settings/help` triggers update check - [ ] Verify once update is downloaded, prompts to `Relaunch` to install update #### Components - [x] Delete Adblock folder from browser profile and restart browser. Visit `brave://components` and verify `Brave Ad Block Updater` downloads and update the component. Repeat for all Brave components ### Upgrade - [x] Make sure that data from the last version appears in the new version OK - [x] Ensure that `brave://version` lists the expected Brave & Chromium versions - [x] With data from the last version, verify that - [x] Bookmarks on the bookmark toolbar and bookmark folders can be opened - [x] Cookies are preserved - [x] Installed extensions are retained and work correctly - [x] Opened tabs can be reloaded - [x] Stored passwords are preserved - [x] Sync chain created in previous version is retained - [x] Social media blocking buttons changes are retained - [x] Rewards - [x] Wallet balance is retained - [x] Auto-contribute list is retained - [x] Both Tips and Monthly Contributions are retained - [x] Wallet panel transactions list is retained - [x] Changes to rewards settings are retained - [x] Ads - [x] Both `Estimated pending rewards` & `Ad notifications received this month` are retained - [x] Changes to ads settings are retained - [x] Ensure that ads are not being enabled when upgrading to a new version if they were disabled - [x] Ensure that ads are not disabled when upgrading to a new version if they were enabled
- index: 1.0
- text_combine:
Manual test run on Linux for 1.18.x - ### Installer - [x] Check signature: If OS Run `spctl --assess --verbose /Applications/Brave-Browser-Beta.app/` and make sure it returns `accepted`. If Windows right click on the `brave_installer-x64.exe` and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window ### Widevine - [x] Verify `Widevine Notification` is shown when you visit Netflix for the first time - [x] Test that you can stream on Netflix on a fresh profile after installing Widevine ### Rewards - [x] Verify account balance shows correct BAT and USD value - [x] Verify you are able to restore a wallet - [x] Verify actions taken (claiming grant, tipping, auto-contribute) display in wallet panel - [x] Verify when you click on the BR panel while on a site, the panel displays site specific information (site favicon, domain, attention %) - [x] Verify you are able to make one-time tip and they display in tips panel - [x] Verify you are able to make recurring tip and they display in tips panel - [x] Verify you can tip a verified publisher - [x] Verify you can tip a verified YouTube creator - [x] Verify you are able to perform a contribution - [x] Verify if you disable auto-contribute you are still able to tip regular sites and YouTube creators ## Update tests - [x] Verify visiting `brave://settings/help` triggers update check - [ ] Verify once update is downloaded, prompts to `Relaunch` to install update #### Components - [x] Delete Adblock folder from browser profile and restart browser. Visit `brave://components` and verify `Brave Ad Block Updater` downloads and update the component. Repeat for all Brave components ### Upgrade - [x] Make sure that data from the last version appears in the new version OK - [x] Ensure that `brave://version` lists the expected Brave & Chromium versions - [x] With data from the last version, verify that - [x] Bookmarks on the bookmark toolbar and bookmark folders can be opened - [x] Cookies are preserved - [x] Installed extensions are retained and work correctly - [x] Opened tabs can be reloaded - [x] Stored passwords are preserved - [x] Sync chain created in previous version is retained - [x] Social media blocking buttons changes are retained - [x] Rewards - [x] Wallet balance is retained - [x] Auto-contribute list is retained - [x] Both Tips and Monthly Contributions are retained - [x] Wallet panel transactions list is retained - [x] Changes to rewards settings are retained - [x] Ads - [x] Both `Estimated pending rewards` & `Ad notifications received this month` are retained - [x] Changes to ads settings are retained - [x] Ensure that ads are not being enabled when upgrading to a new version if they were disabled - [x] Ensure that ads are not disabled when upgrading to a new version if they were enabled
- label: non_process
- text:
manual test run on linux for x installer check signature if os run spctl assess verbose applications brave browser beta app and make sure it returns accepted if windows right click on the brave installer exe and go to properties go to the digital signatures tab and double click on the signature make sure it says the digital signature is ok in the popup window widevine verify widevine notification is shown when you visit netflix for the first time test that you can stream on netflix on a fresh profile after installing widevine rewards verify account balance shows correct bat and usd value verify you are able to restore a wallet verify actions taken claiming grant tipping auto contribute display in wallet panel verify when you click on the br panel while on a site the panel displays site specific information site favicon domain attention verify you are able to make one time tip and they display in tips panel verify you are able to make recurring tip and they display in tips panel verify you can tip a verified publisher verify you can tip a verified youtube creator verify you are able to perform a contribution verify if you disable auto contribute you are still able to tip regular sites and youtube creators update tests verify visiting brave settings help triggers update check verify once update is downloaded prompts to relaunch to install update components delete adblock folder from browser profile and restart browser visit brave components and verify brave ad block updater downloads and update the component repeat for all brave components upgrade make sure that data from the last version appears in the new version ok ensure that brave version lists the expected brave chromium versions with data from the last version verify that bookmarks on the bookmark toolbar and bookmark folders can be opened cookies are preserved installed extensions are retained and work correctly opened tabs can be reloaded stored passwords are preserved sync chain created in previous version is retained social media blocking buttons changes are retained rewards wallet balance is retained auto contribute list is retained both tips and monthly contributions are retained wallet panel transactions list is retained changes to rewards settings are retained ads both estimated pending rewards ad notifications received this month are retained changes to ads settings are retained ensure that ads are not being enabled when upgrading to a new version if they were disabled ensure that ads are not disabled when upgrading to a new version if they were enabled
- binary_label: 0
**Sample row (Unnamed: 0 = 16,019)**
- id: 20,188,227,949
- type: IssuesEvent
- created_at: 2022-02-11 01:19:47
- repo: savitamittalmsft/WAS-SEC-TEST
- repo_url: https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
- action: opened
- title: Regularly simulate attacks against critical accounts
- labels: WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Deployment & Testing Testing & Validation
- body:
<a href="https://docs.microsoft.com/azure/architecture/framework/Security/critical-impact-accounts#attack-simulation-for-critical-impact-accounts">Regularly simulate attacks against critical accounts</a> <p><b>Why Consider This?</b></p> People are a critical part of your defense, especially those with elevated permissions. Ensuring they have the knowledge and skills to avoid and resist attacks will reduce your overall organizational risk. <p><b>Context</b></p> <p><span>If the organization has Microsoft Defender for Office 365 Plan 2, which includes Threat Investigation and Response capabilities, you can use Attack Simulator in the Security "amp; Compliance Center to run realistic attack scenarios in your organization. These simulated attacks can help you find and educate vulnerable users before a real attack impacts your bottom line. </span></p><p><span>Evaluate the available tools and design a proactive approach to simulating user attacks to prepare for real-world events.</span></p> <p><b>Suggested Actions</b></p> <p><span>Evaluate the current toolset available such as Office 365 ATP attack simulator and regularly simulate attacks against critical accounts to prepare them for real-world events.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/critical-impact-accounts#attack-simulation-for-critical-impact-accounts" target="_blank"><span>Attack simulation for critical impact accounts</span></a><span /></p><p><a href="https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/attack-simulator?view=o365-worldwide" target="_blank"><span>ATP attack simulator</span></a><span /></p><p><span> </span></p>
- index: 1.0
- text_combine:
Regularly simulate attacks against critical accounts - <a href="https://docs.microsoft.com/azure/architecture/framework/Security/critical-impact-accounts#attack-simulation-for-critical-impact-accounts">Regularly simulate attacks against critical accounts</a> <p><b>Why Consider This?</b></p> People are a critical part of your defense, especially those with elevated permissions. Ensuring they have the knowledge and skills to avoid and resist attacks will reduce your overall organizational risk. <p><b>Context</b></p> <p><span>If the organization has Microsoft Defender for Office 365 Plan 2, which includes Threat Investigation and Response capabilities, you can use Attack Simulator in the Security "amp; Compliance Center to run realistic attack scenarios in your organization. These simulated attacks can help you find and educate vulnerable users before a real attack impacts your bottom line. </span></p><p><span>Evaluate the available tools and design a proactive approach to simulating user attacks to prepare for real-world events.</span></p> <p><b>Suggested Actions</b></p> <p><span>Evaluate the current toolset available such as Office 365 ATP attack simulator and regularly simulate attacks against critical accounts to prepare them for real-world events.</span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/Security/critical-impact-accounts#attack-simulation-for-critical-impact-accounts" target="_blank"><span>Attack simulation for critical impact accounts</span></a><span /></p><p><a href="https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/attack-simulator?view=o365-worldwide" target="_blank"><span>ATP attack simulator</span></a><span /></p><p><span> </span></p>
- label: process
- text:
regularly simulate attacks against critical accounts why consider this people are a critical part of your defense especially those with elevated permissions ensuring they have the knowledge and skills to avoid and resist attacks will reduce your overall organizational risk context if the organization has microsoft defender for office plan which includes threat investigation and response capabilities you can use attack simulator in the security amp compliance center to run realistic attack scenarios in your organization these simulated attacks can help you find and educate vulnerable users before a real attack impacts your bottom line evaluate the available tools and design a proactive approach to simulating user attacks to prepare for real world events suggested actions evaluate the current toolset available such as office atp attack simulator and regularly simulate attacks against critical accounts to prepare them for real world events learn more attack simulation for critical impact accounts atp attack simulator
- binary_label: 1
**Sample row (Unnamed: 0 = 22,445)**
- id: 31,164,792,242
- type: IssuesEvent
- created_at: 2023-08-16 18:46:39
- repo: Azure/azure-sdk-tools
- repo_url: https://api.github.com/repos/Azure/azure-sdk-tools
- action: closed
- title: Increase TypeSpec and Swagger APIView usage
- labels: Epic APIView Central-EngSys TypeSpec WS: Process Tools & Automation Swagger
- body:
Work involved: ```[tasklist] ### Tasks - [ ] https://github.com/Azure/azure-sdk-tools/issues/5273 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5377 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5521 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5573 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5751 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5740 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5833 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5831 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5851 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5755 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5941 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5955 - [ ] https://github.com/Azure/azure-sdk-tools/issues/6174 - [ ] https://github.com/Azure/azure-sdk-tools/issues/6108 - [ ] https://github.com/Azure/azure-sdk-tools/issues/6295 ```
- index: 1.0
- text_combine:
Increase TypeSpec and Swagger APIView usage - Work involved: ```[tasklist] ### Tasks - [ ] https://github.com/Azure/azure-sdk-tools/issues/5273 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5377 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5521 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5573 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5751 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5740 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5833 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5831 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5851 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5755 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5941 - [ ] https://github.com/Azure/azure-sdk-tools/issues/5955 - [ ] https://github.com/Azure/azure-sdk-tools/issues/6174 - [ ] https://github.com/Azure/azure-sdk-tools/issues/6108 - [ ] https://github.com/Azure/azure-sdk-tools/issues/6295 ```
- label: process
- text:
increase typespec and swagger apiview usage work involved tasks
- binary_label: 1
**Sample row (Unnamed: 0 = 20,456)**
- id: 27,123,513,872
- type: IssuesEvent
- created_at: 2023-02-16 02:00:07
- repo: lizhihao6/get-daily-arxiv-noti
- repo_url: https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
- action: opened
- title: New submissions for Thu, 16 Feb 23
- labels: event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
- body:
## Keyword: events ### Efficient Teacher: Semi-Supervised Object Detection for YOLOv5 - **Authors:** Bowen Xu, Mingtao Chen, Wenlong Guan, Lulu Hu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.07577 - **Pdf link:** https://arxiv.org/pdf/2302.07577 - **Abstract** Semi-Supervised Object Detection (SSOD) has been successful in improving the performance of both R-CNN series and anchor-free detectors. However, one-stage anchor-based detectors lack the structure to generate high-quality or flexible pseudo labels, leading to serious inconsistency problems in SSOD, such as YOLOv5. In this paper, we propose the Efficient Teacher framework for scalable and effective one-stage anchor-based SSOD training, consisting of Dense Detector, Pseudo Label Assigner, and Epoch Adaptor. Dense Detector is a baseline model that extends RetinaNet with dense sampling techniques inspired by YOLOv5. The Efficient Teacher framework introduces a novel pseudo label assignment mechanism, named Pseudo Label Assigner, which makes more refined use of pseudo labels from Dense Detector. Epoch Adaptor is a method that enables a stable and efficient end-to-end semi-supervised training schedule for Dense Detector. The Pseudo Label Assigner prevents the occurrence of bias caused by a large number of low-quality pseudo labels that may interfere with the Dense Detector during the student-teacher mutual learning mechanism, and the Epoch Adaptor utilizes domain and distribution adaptation to allow Dense Detector to learn globally distributed consistent features, making the training independent of the proportion of labeled data. Our experiments show that the Efficient Teacher framework achieves state-of-the-art results on VOC, COCO-standard, and COCO-additional using fewer FLOPs than previous methods. To the best of our knowledge, this is the first attempt to apply Semi-Supervised Object Detection to YOLOv5. ### CERiL: Continuous Event-based Reinforcement Learning - **Authors:** Celyn Walters, Simon Hadfield - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2302.07667 - **Pdf link:** https://arxiv.org/pdf/2302.07667 - **Abstract** This paper explores the potential of event cameras to enable continuous time reinforcement learning. We formalise this problem where a continuous stream of unsynchronised observations is used to produce a corresponding stream of output actions for the environment. This lack of synchronisation enables greatly enhanced reactivity. We present a method to train on event streams derived from standard RL environments, thereby solving the proposed continuous time RL problem. The CERiL algorithm uses specialised network layers which operate directly on an event stream, rather than aggregating events into quantised image frames. We show the advantages of event streams over less-frequent RGB images. The proposed system outperforms networks typically used in RL, even succeeding at tasks which cannot be solved traditionally. We also demonstrate the value of our CERiL approach over a standard SNN baseline using event streams. 
## Keyword: event camera ### CERiL: Continuous Event-based Reinforcement Learning - **Authors:** Celyn Walters, Simon Hadfield - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2302.07667 - **Pdf link:** https://arxiv.org/pdf/2302.07667 - **Abstract** This paper explores the potential of event cameras to enable continuous time reinforcement learning. We formalise this problem where a continuous stream of unsynchronised observations is used to produce a corresponding stream of output actions for the environment. This lack of synchronisation enables greatly enhanced reactivity. We present a method to train on event streams derived from standard RL environments, thereby solving the proposed continuous time RL problem. The CERiL algorithm uses specialised network layers which operate directly on an event stream, rather than aggregating events into quantised image frames. We show the advantages of event streams over less-frequent RGB images. The proposed system outperforms networks typically used in RL, even succeeding at tasks which cannot be solved traditionally. We also demonstrate the value of our CERiL approach over a standard SNN baseline using event streams. ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP There is no result ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### Depth- and Semantics-aware Multi-modal Domain Translation: Generating 3D Panoramic Color Images from LiDAR Point Clouds - **Authors:** Tiago Cortinhal, Eren Erdal Aksoy - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.07661 - **Pdf link:** https://arxiv.org/pdf/2302.07661 - **Abstract** This work presents a new depth- and semantics-aware conditional generative model, named TITAN-Next, for cross-domain image-to-image translation in a multi-modal setup between LiDAR and camera sensors. The proposed model leverages scene semantics as a mid-level representation and is able to translate raw LiDAR point clouds to RGB-D camera images by solely relying on semantic scene segments. We claim that this is the first framework of its kind and it has practical applications in autonomous vehicles such as providing a fail-safe mechanism and augmenting available data in the target image domain. The proposed model is evaluated on the large-scale and challenging Semantic-KITTI dataset, and experimental findings show that it considerably outperforms the original TITAN-Net and other strong baselines by 23.7$\%$ margin in terms of IoU. ## Keyword: raw image There is no result
- index: 2.0
- text_combine:
New submissions for Thu, 16 Feb 23 - ## Keyword: events ### Efficient Teacher: Semi-Supervised Object Detection for YOLOv5 - **Authors:** Bowen Xu, Mingtao Chen, Wenlong Guan, Lulu Hu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.07577 - **Pdf link:** https://arxiv.org/pdf/2302.07577 - **Abstract** Semi-Supervised Object Detection (SSOD) has been successful in improving the performance of both R-CNN series and anchor-free detectors. However, one-stage anchor-based detectors lack the structure to generate high-quality or flexible pseudo labels, leading to serious inconsistency problems in SSOD, such as YOLOv5. In this paper, we propose the Efficient Teacher framework for scalable and effective one-stage anchor-based SSOD training, consisting of Dense Detector, Pseudo Label Assigner, and Epoch Adaptor. Dense Detector is a baseline model that extends RetinaNet with dense sampling techniques inspired by YOLOv5. The Efficient Teacher framework introduces a novel pseudo label assignment mechanism, named Pseudo Label Assigner, which makes more refined use of pseudo labels from Dense Detector. Epoch Adaptor is a method that enables a stable and efficient end-to-end semi-supervised training schedule for Dense Detector. The Pseudo Label Assigner prevents the occurrence of bias caused by a large number of low-quality pseudo labels that may interfere with the Dense Detector during the student-teacher mutual learning mechanism, and the Epoch Adaptor utilizes domain and distribution adaptation to allow Dense Detector to learn globally distributed consistent features, making the training independent of the proportion of labeled data. Our experiments show that the Efficient Teacher framework achieves state-of-the-art results on VOC, COCO-standard, and COCO-additional using fewer FLOPs than previous methods. To the best of our knowledge, this is the first attempt to apply Semi-Supervised Object Detection to YOLOv5. ### CERiL: Continuous Event-based Reinforcement Learning - **Authors:** Celyn Walters, Simon Hadfield - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2302.07667 - **Pdf link:** https://arxiv.org/pdf/2302.07667 - **Abstract** This paper explores the potential of event cameras to enable continuous time reinforcement learning. We formalise this problem where a continuous stream of unsynchronised observations is used to produce a corresponding stream of output actions for the environment. This lack of synchronisation enables greatly enhanced reactivity. We present a method to train on event streams derived from standard RL environments, thereby solving the proposed continuous time RL problem. The CERiL algorithm uses specialised network layers which operate directly on an event stream, rather than aggregating events into quantised image frames. We show the advantages of event streams over less-frequent RGB images. The proposed system outperforms networks typically used in RL, even succeeding at tasks which cannot be solved traditionally. We also demonstrate the value of our CERiL approach over a standard SNN baseline using event streams. 
## Keyword: event camera ### CERiL: Continuous Event-based Reinforcement Learning - **Authors:** Celyn Walters, Simon Hadfield - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2302.07667 - **Pdf link:** https://arxiv.org/pdf/2302.07667 - **Abstract** This paper explores the potential of event cameras to enable continuous time reinforcement learning. We formalise this problem where a continuous stream of unsynchronised observations is used to produce a corresponding stream of output actions for the environment. This lack of synchronisation enables greatly enhanced reactivity. We present a method to train on event streams derived from standard RL environments, thereby solving the proposed continuous time RL problem. The CERiL algorithm uses specialised network layers which operate directly on an event stream, rather than aggregating events into quantised image frames. We show the advantages of event streams over less-frequent RGB images. The proposed system outperforms networks typically used in RL, even succeeding at tasks which cannot be solved traditionally. We also demonstrate the value of our CERiL approach over a standard SNN baseline using event streams. ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP There is no result ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### Depth- and Semantics-aware Multi-modal Domain Translation: Generating 3D Panoramic Color Images from LiDAR Point Clouds - **Authors:** Tiago Cortinhal, Eren Erdal Aksoy - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2302.07661 - **Pdf link:** https://arxiv.org/pdf/2302.07661 - **Abstract** This work presents a new depth- and semantics-aware conditional generative model, named TITAN-Next, for cross-domain image-to-image translation in a multi-modal setup between LiDAR and camera sensors. The proposed model leverages scene semantics as a mid-level representation and is able to translate raw LiDAR point clouds to RGB-D camera images by solely relying on semantic scene segments. We claim that this is the first framework of its kind and it has practical applications in autonomous vehicles such as providing a fail-safe mechanism and augmenting available data in the target image domain. The proposed model is evaluated on the large-scale and challenging Semantic-KITTI dataset, and experimental findings show that it considerably outperforms the original TITAN-Net and other strong baselines by 23.7$\%$ margin in terms of IoU. ## Keyword: raw image There is no result
- label: process
- text:
new submissions for thu feb keyword events efficient teacher semi supervised object detection for authors bowen xu mingtao chen wenlong guan lulu hu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract semi supervised object detection ssod has been successful in improving the performance of both r cnn series and anchor free detectors however one stage anchor based detectors lack the structure to generate high quality or flexible pseudo labels leading to serious inconsistency problems in ssod such as in this paper we propose the efficient teacher framework for scalable and effective one stage anchor based ssod training consisting of dense detector pseudo label assigner and epoch adaptor dense detector is a baseline model that extends retinanet with dense sampling techniques inspired by the efficient teacher framework introduces a novel pseudo label assignment mechanism named pseudo label assigner which makes more refined use of pseudo labels from dense detector epoch adaptor is a method that enables a stable and efficient end to end semi supervised training schedule for dense detector the pseudo label assigner prevents the occurrence of bias caused by a large number of low quality pseudo labels that may interfere with the dense detector during the student teacher mutual learning mechanism and the epoch adaptor utilizes domain and distribution adaptation to allow dense detector to learn globally distributed consistent features making the training independent of the proportion of labeled data our experiments show that the efficient teacher framework achieves state of the art results on voc coco standard and coco additional using fewer flops than previous methods to the best of our knowledge this is the first attempt to apply semi supervised object detection to ceril continuous event based reinforcement learning authors celyn walters simon hadfield subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg arxiv link pdf link abstract this paper explores the potential of event cameras to enable continuous time reinforcement learning we formalise this problem where a continuous stream of unsynchronised observations is used to produce a corresponding stream of output actions for the environment this lack of synchronisation enables greatly enhanced reactivity we present a method to train on event streams derived from standard rl environments thereby solving the proposed continuous time rl problem the ceril algorithm uses specialised network layers which operate directly on an event stream rather than aggregating events into quantised image frames we show the advantages of event streams over less frequent rgb images the proposed system outperforms networks typically used in rl even succeeding at tasks which cannot be solved traditionally we also demonstrate the value of our ceril approach over a standard snn baseline using event streams keyword event camera ceril continuous event based reinforcement learning authors celyn walters simon hadfield subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg arxiv link pdf link abstract this paper explores the potential of event cameras to enable continuous time reinforcement learning we formalise this problem where a continuous stream of unsynchronised observations is used to produce a corresponding stream of output actions for the environment this lack of synchronisation enables greatly enhanced reactivity we present a 
method to train on event streams derived from standard rl environments thereby solving the proposed continuous time rl problem the ceril algorithm uses specialised network layers which operate directly on an event stream rather than aggregating events into quantised image frames we show the advantages of event streams over less frequent rgb images the proposed system outperforms networks typically used in rl even succeeding at tasks which cannot be solved traditionally we also demonstrate the value of our ceril approach over a standard snn baseline using event streams keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp there is no result keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw depth and semantics aware multi modal domain translation generating panoramic color images from lidar point clouds authors tiago cortinhal eren erdal aksoy subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this work presents a new depth and semantics aware conditional generative model named titan next for cross domain image to image translation in a multi modal setup between lidar and camera sensors the proposed model leverages scene semantics as a mid level representation and is able to translate raw lidar point clouds to rgb d camera images by solely relying on semantic scene segments we claim that this is the first framework of its kind and it has practical applications in autonomous vehicles such as providing a fail safe mechanism and augmenting available data in the target image domain the proposed model is evaluated on the large scale and challenging semantic kitti dataset and experimental findings show that it considerably outperforms the original titan net and other strong baselines by margin in terms of iou keyword raw image there is no result
- binary_label: 1
**Sample row (Unnamed: 0 = 6,889)**
- id: 10,028,552,245
- type: IssuesEvent
- created_at: 2019-07-17 11:59:31
- repo: linnovate/root
- repo_url: https://api.github.com/repos/linnovate/root
- action: reopened
- title: Sort by Title option in Settings -> Folders/Offices/Template Documents not sort by alphabet
- labels: 2.0.7 Process bug Settings
- body:
go to Settings -> Folders/Offices/Template Documents create few items and named them choose the option sort by Title the list not sort by alphabet - example of special but also happens in regular: ![image](https://user-images.githubusercontent.com/45143091/52913431-d3cf4380-32c6-11e9-9915-5d726768d283.png)
- index: 1.0
- text_combine:
Sort by Title option in Settings -> Folders/Offices/Template Documents not sort by alphabet - go to Settings -> Folders/Offices/Template Documents create few items and named them choose the option sort by Title the list not sort by alphabet - example of special but also happens in regular: ![image](https://user-images.githubusercontent.com/45143091/52913431-d3cf4380-32c6-11e9-9915-5d726768d283.png)
- label: process
- text:
sort by title option in settings folders offices template documents not sort by alphabet go to settings folders offices template documents create few items and named them choose the option sort by title the list not sort by alphabet example of special but also happens in regular
- binary_label: 1
**Sample row (Unnamed: 0 = 10,463)**
- id: 13,240,821,294
- type: IssuesEvent
- created_at: 2020-08-19 07:07:59
- repo: prisma/prisma
- repo_url: https://api.github.com/repos/prisma/prisma
- action: closed
- title: Nicer error message for schema with missing data source
- labels: kind/improvement process/candidate team/engines topic: errors
- body:
Internal context: https://prisma-company.slack.com/archives/C4GCG53BP/p1597674794047900 Schemas now require a datasource for `generate`, otherwise you get a panic like this: ``` FAIL src/__tests__/getGenerators/getGenerators.test.ts ● getGenerators › basic Schema parsing thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', query-engine/query-engine/src/cli.rs:91:48 stack backtrace: 0: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt 1: core::fmt::write 2: std::io::Write::write_fmt 3: std::panicking::default_hook::{{closure}} 4: std::panicking::default_hook 5: std::panicking::rust_panic_with_hook 6: rust_begin_unwind 7: core::panicking::panic_fmt 8: core::panicking::panic 9: query_engine::main::main::{{closure}}::main::{{closure}} 10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 11: std::thread::local::LocalKey<T>::with 12: scoped_tls::ScopedKey<T>::set 13: async_executor::LocalExecutor::run 14: scoped_tls::ScopedKey<T>::set 15: tokio::runtime::context::enter 16: tokio::runtime::Runtime::enter 17: std::thread::local::LocalKey<T>::with 18: std::thread::local::LocalKey<T>::with 19: async_std::task::builder::Builder::blocking 20: query_engine::main 21: std::rt::lang_start::{{closure}} 22: std::rt::lang_start_internal 23: main note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace. ``` We should have a helpful error message instead.
- index: 1.0
- text_combine:
Nicer error message for schema with missing data source - Internal context: https://prisma-company.slack.com/archives/C4GCG53BP/p1597674794047900 Schemas now require a datasource for `generate`, otherwise you get a panic like this: ``` FAIL src/__tests__/getGenerators/getGenerators.test.ts ● getGenerators › basic Schema parsing thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', query-engine/query-engine/src/cli.rs:91:48 stack backtrace: 0: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt 1: core::fmt::write 2: std::io::Write::write_fmt 3: std::panicking::default_hook::{{closure}} 4: std::panicking::default_hook 5: std::panicking::rust_panic_with_hook 6: rust_begin_unwind 7: core::panicking::panic_fmt 8: core::panicking::panic 9: query_engine::main::main::{{closure}}::main::{{closure}} 10: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 11: std::thread::local::LocalKey<T>::with 12: scoped_tls::ScopedKey<T>::set 13: async_executor::LocalExecutor::run 14: scoped_tls::ScopedKey<T>::set 15: tokio::runtime::context::enter 16: tokio::runtime::Runtime::enter 17: std::thread::local::LocalKey<T>::with 18: std::thread::local::LocalKey<T>::with 19: async_std::task::builder::Builder::blocking 20: query_engine::main 21: std::rt::lang_start::{{closure}} 22: std::rt::lang_start_internal 23: main note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace. ``` We should have a helpful error message instead.
- label: process
- text:
nicer error message for schema with missing data source internal context schemas now require a datasource for generate otherwise you get a panic like this fail src tests getgenerators getgenerators test ts ● getgenerators › basic schema parsing thread main panicked at called option unwrap on a none value query engine query engine src cli rs stack backtrace fmt core fmt write std io write write fmt std panicking default hook closure std panicking default hook std panicking rust panic with hook rust begin unwind core panicking panic fmt core panicking panic query engine main main closure main closure as core future future future poll std thread local localkey with scoped tls scopedkey set async executor localexecutor run scoped tls scopedkey set tokio runtime context enter tokio runtime runtime enter std thread local localkey with std thread local localkey with async std task builder builder blocking query engine main std rt lang start closure std rt lang start internal main note some details are omitted run with rust backtrace full for a verbose backtrace we should have a helpful error message instead
- binary_label: 1
**Sample row (Unnamed: 0 = 214,814)**
- id: 7,277,039,837
- type: IssuesEvent
- created_at: 2018-02-21 18:09:54
- repo: Mandiklopper/People-Connect
- repo_url: https://api.github.com/repos/Mandiklopper/People-Connect
- action: opened
- title: Incomplete updates to X3 Employee records with approved ESS change requests
- labels: Emergency Priority
- body:
Some requests to update employee’s details via ESS do not update the records on X3 even after approval by Supervisors and HRBPs. This is very worrisome and gives me serious cause for great concern. The example below is a case in point that should be investigated as a matter of urgency. I have painstakingly reviewed ESS workflow monitor records to extract details of change requests that were made and approved (between 15 & 17 January, 2018) and reviewed these against the employee’s record of the staff in question –A20801. The following WFM sequence numbers of 15th February, 2018, reviewed did not completely update the employee's record on X3; - 1800008129 (15 Feb) - 1800008130 (15 Feb) -1800008132 (15 Feb) -1800008306 (15 Feb) -1800008316 (15 Feb) -1800008340 (15 Feb) - 1800008341 (15 Feb) -1800008343 (15 Feb) - 1800008351 (15 Feb) -1800009998 (16 Feb) -1800010026 (16 Feb) Regards,
- index: 1.0
- text_combine:
Incomplete updates to X3 Employee records with approved ESS change requests - Some requests to update employee’s details via ESS do not update the records on X3 even after approval by Supervisors and HRBPs. This is very worrisome and gives me serious cause for great concern. The example below is a case in point that should be investigated as a matter of urgency. I have painstakingly reviewed ESS workflow monitor records to extract details of change requests that were made and approved (between 15 & 17 January, 2018) and reviewed these against the employee’s record of the staff in question –A20801. The following WFM sequence numbers of 15th February, 2018, reviewed did not completely update the employee's record on X3; - 1800008129 (15 Feb) - 1800008130 (15 Feb) -1800008132 (15 Feb) -1800008306 (15 Feb) -1800008316 (15 Feb) -1800008340 (15 Feb) - 1800008341 (15 Feb) -1800008343 (15 Feb) - 1800008351 (15 Feb) -1800009998 (16 Feb) -1800010026 (16 Feb) Regards,
- label: non_process
- text:
incomplete updates to employee records with approved ess change requests some requests to update employee’s details via ess do not update the records on even after approval by supervisors and hrbps this is very worrisome and gives me serious cause for great concern the example below is a case in point that should be investigated as a matter of urgency i have painstakingly reviewed ess workflow monitor records to extract details of change requests that were made and approved between january and reviewed these against the employee’s record of the staff in question – the following wfm sequence numbers of february reviewed did not completely update the employee s record on feb feb feb feb feb feb feb feb feb feb feb regards
- binary_label: 0
**Sample row (Unnamed: 0 = 153,241)**
- id: 5,887,755,726
- type: IssuesEvent
- created_at: 2017-05-17 08:23:48
- repo: dwyl/hq
- repo_url: https://api.github.com/repos/dwyl/hq
- action: opened
- title: Business Plan
- labels: priority-1
- body:
This has been brought up a number of times before, but keeps popping up everywhere (e.g. https://github.com/dwyl/hq/issues/325#issuecomment-301987599) so it's time we made an issue about it! dwyl needs a thorough and written down business plan for the core organisation and various arms of the organisation. We can start with the basics for the core business and each of the businesses in the org: - Mission Statement - Vision Statement - Values & Guiding Principles - Problem - Solution - Value Proposition - Customers - Markets - Partnerships
- index: 1.0
- text_combine:
Business Plan - This has been brought up a number of times before, but keeps popping up everywhere (e.g. https://github.com/dwyl/hq/issues/325#issuecomment-301987599) so it's time we made an issue about it! dwyl needs a thorough and written down business plan for the core organisation and various arms of the organisation. We can start with the basics for the core business and each of the businesses in the org: - Mission Statement - Vision Statement - Values & Guiding Principles - Problem - Solution - Value Proposition - Customers - Markets - Partnerships
- label: non_process
- text:
business plan this has been brought up a number of times before but keeps popping up everywhere e g so it s time we made an issue about it dwyl needs a thorough and written down business plan for the core organisation and various arms of the organisation we can start with the basics for the core business and each of the businesses in the org mission statement vision statement values guiding principles problem solution value proposition customers markets partnerships
- binary_label: 0
**Sample row (Unnamed: 0 = 15,928)**
- id: 20,147,470,101
- type: IssuesEvent
- created_at: 2022-02-09 09:06:56
- repo: CMPT756-A5-Org-Patel-Dhruv/MYC756PROJECT
- repo_url: https://api.github.com/repos/CMPT756-A5-Org-Patel-Dhruv/MYC756PROJECT
- action: opened
- title: Update values with average values for VISA_BALANCE Column
- labels: preprocessing 6 months dataset
- body:
Write a python code to replace 0 in the visa_balance column with average values <img width="73" alt="visa balance" src="https://user-images.githubusercontent.com/97414622/153161751-e51d6f20-7bf1-421d-8580-9871b8f03af3.PNG">
- index: 1.0
- text_combine:
Update values with average values for VISA_BALANCE Column - Write a python code to replace 0 in the visa_balance column with average values <img width="73" alt="visa balance" src="https://user-images.githubusercontent.com/97414622/153161751-e51d6f20-7bf1-421d-8580-9871b8f03af3.PNG">
- label: process
- text:
update values with average values for visa balance column write a python code to replace in the visa balance column with average values img width alt visa balance src
- binary_label: 1
**Sample row (Unnamed: 0 = 2,707)**
- id: 5,577,521,225
- type: IssuesEvent
- created_at: 2017-03-28 09:52:15
- repo: bazelbuild/bazel
- repo_url: https://api.github.com/repos/bazelbuild/bazel
- action: closed
- title: Bazel-Publish-Site and Bazel-Push-Benchmark-Output is failing
- labels: breakage category: misc > misc P1 type: process
- body:
My blog post https://github.com/bazelbuild/bazel/blob/master/site/blog/_posts/2017-03-21-design-of-skylark.md doesn't appear no the website (https://bazel.build/blog/). Anyone knows why? @davidzchen
- index: 1.0
- text_combine:
Bazel-Publish-Site and Bazel-Push-Benchmark-Output is failing - My blog post https://github.com/bazelbuild/bazel/blob/master/site/blog/_posts/2017-03-21-design-of-skylark.md doesn't appear no the website (https://bazel.build/blog/). Anyone knows why? @davidzchen
- label: process
- text:
bazel publish site and bazel push benchmark output is failing my blog post doesn t appear no the website anyone knows why davidzchen
- binary_label: 1
**Sample row (Unnamed: 0 = 19,103)**
- id: 25,149,664,257
- type: IssuesEvent
- created_at: 2022-11-10 09:02:45
- repo: goblint/analyzer
- repo_url: https://api.github.com/repos/goblint/analyzer
- action: opened
- title: Do not preprocess `.i` files
- labels: feature performance preprocessing good first issue
- body:
`gcc` man page describes its behavior depending on the input file extension: ``` file.c C source code that must be preprocessed. file.i C source code that should not be preprocessed. ``` We could also follow this convention and skip executing the preprocessor subprocess on `.i` files, where it's completely pointless. This is the case for many SV-COMP tasks, although it probably won't make a performance difference. Since SV-COMP tasks are always supposed to be preprocessed, even as `.c` files, we could even have an option that assumes any input to be preprocessed regardless of extension. Then we wouldn't even have the GCC dependency in SV-COMP.
- index: 1.0
- text_combine:
Do not preprocess `.i` files - `gcc` man page describes its behavior depending on the input file extension: ``` file.c C source code that must be preprocessed. file.i C source code that should not be preprocessed. ``` We could also follow this convention and skip executing the preprocessor subprocess on `.i` files, where it's completely pointless. This is the case for many SV-COMP tasks, although it probably won't make a performance difference. Since SV-COMP tasks are always supposed to be preprocessed, even as `.c` files, we could even have an option that assumes any input to be preprocessed regardless of extension. Then we wouldn't even have the GCC dependency in SV-COMP.
- label: process
- text:
do not preprocess i files gcc man page describes its behavior depending on the input file extension file c c source code that must be preprocessed file i c source code that should not be preprocessed we could also follow this convention and skip executing the preprocessor subprocess on i files where it s completely pointless this is the case for many sv comp tasks although it probably won t make a performance difference since sv comp tasks are always supposed to be preprocessed even as c files we could even have an option that assumes any input to be preprocessed regardless of extension then we wouldn t even have the gcc dependency in sv comp
- binary_label: 1
**Sample row (Unnamed: 0 = 14,076)**
- id: 16,945,528,415
- type: IssuesEvent
- created_at: 2021-06-28 06:06:06
- repo: GoogleCloudPlatform/fda-mystudies
- repo_url: https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
- action: closed
- title: [iOS] App fails to navigate to overview screen on signout/delete account
- labels: Bug P1 Process: Fixed Process: Tested dev iOS
- body:
**Steps:** For signout 1. Signup/Login 2. Enroll into the study 3. Click on menu and signout 4. Observe For Account Delete 1. Signup/Login 2. Enroll into the study 3. Navigate to My account 4. Click on Delete account 5. Click on Proceed to delete account 6. Observe **Actual:** App stucks on the my account/menu screen itself and if user performs some actions backend error is observed. **Expected:** App should be navigated to the overview screen Note: Issue is observed in Gateway app If user kill and relaunches the app, user will be signed out/account will be deleted https://user-images.githubusercontent.com/60386291/117776348-17704800-b259-11eb-8eeb-8cbd00988d0f.MOV
- index: 2.0
- text_combine:
[iOS] App fails to navigate to overview screen on signout/delete account - **Steps:** For signout 1. Signup/Login 2. Enroll into the study 3. Click on menu and signout 4. Observe For Account Delete 1. Signup/Login 2. Enroll into the study 3. Navigate to My account 4. Click on Delete account 5. Click on Proceed to delete account 6. Observe **Actual:** App stucks on the my account/menu screen itself and if user performs some actions backend error is observed. **Expected:** App should be navigated to the overview screen Note: Issue is observed in Gateway app If user kill and relaunches the app, user will be signed out/account will be deleted https://user-images.githubusercontent.com/60386291/117776348-17704800-b259-11eb-8eeb-8cbd00988d0f.MOV
- label: process
- text:
app fails to navigate to overview screen on signout delete account steps for signout signup login enroll into the study click on menu and signout observe for account delete signup login enroll into the study navigate to my account click on delete account click on proceed to delete account observe actual app stucks on the my account menu screen itself and if user performs some actions backend error is observed expected app should be navigated to the overview screen note issue is observed in gateway app if user kill and relaunches the app user will be signed out account will be deleted
- binary_label: 1
**Sample row (Unnamed: 0 = 14,512)**
- id: 17,606,260,041
- type: IssuesEvent
- created_at: 2021-08-17 17:28:56
- repo: dotnet/runtime
- repo_url: https://api.github.com/repos/dotnet/runtime
- action: reopened
- title: Test failure in System.Diagnostics.Process.Tests
- labels: area-System.Diagnostics.Process blocking-clean-ci
- body:
``` C:\h\w\A8F309AF\w\A8620915\e>"C:\h\w\A8F309AF\p\dotnet.exe" exec --runtimeconfig System.Diagnostics.Process.Tests.runtimeconfig.json --depsfile System.Diagnostics.Process.Tests.deps.json xunit.console.dll System.Diagnostics.Process.Tests.dll -xml testResults.xml -nologo -nocolor -notrait category=IgnoreForCI -notrait category=OuterLoop -notrait category=failing Discovering: System.Diagnostics.Process.Tests (method display = ClassAndMethod, method display options = None) Discovered: System.Diagnostics.Process.Tests (found 254 of 278 test cases) Starting: System.Diagnostics.Process.Tests (parallel test collections = on, max threads = 4) System.Diagnostics.Tests.ProcessModuleTests.LongModuleFileNamesAreSupported [FAIL] Assert.Contains() Failure Not found: C:\h\w\A8F309AF\t\ProcessModuleTests_otibjh1e.nwu\aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\LongPath.dll In value: String[] ["C:\\h\\w\\A8F309AF\\p\\dotnet.exe", "C:\\Windows\\SYSTEM32\\ntdll.dll", "C:\\Windows\\System32\\KERNEL32.DLL", "C:\\Windows\\System32\\KERNELBASE.dll", "C:\\Windows\\System32\\ucrtbase.dll", ...] Stack Trace: /_/src/libraries/System.Diagnostics.Process/tests/ProcessModuleTests.cs(127,0): at System.Diagnostics.Tests.ProcessModuleTests.LongModuleFileNamesAreSupported() System.Diagnostics.Tests.ProcessStartInfoTests.ShellExecute_Nano_Fails_Start [SKIP] Condition(s) not met: "IsWindowsNanoServer" Invalid number of parameters 0 File(s) copied Finished: System.Diagnostics.Process.Tests === TEST EXECUTION SUMMARY === ```
1.0
Test failure in System.Diagnostics.Process.Tests - ``` C:\h\w\A8F309AF\w\A8620915\e>"C:\h\w\A8F309AF\p\dotnet.exe" exec --runtimeconfig System.Diagnostics.Process.Tests.runtimeconfig.json --depsfile System.Diagnostics.Process.Tests.deps.json xunit.console.dll System.Diagnostics.Process.Tests.dll -xml testResults.xml -nologo -nocolor -notrait category=IgnoreForCI -notrait category=OuterLoop -notrait category=failing Discovering: System.Diagnostics.Process.Tests (method display = ClassAndMethod, method display options = None) Discovered: System.Diagnostics.Process.Tests (found 254 of 278 test cases) Starting: System.Diagnostics.Process.Tests (parallel test collections = on, max threads = 4) System.Diagnostics.Tests.ProcessModuleTests.LongModuleFileNamesAreSupported [FAIL] Assert.Contains() Failure Not found: C:\h\w\A8F309AF\t\ProcessModuleTests_otibjh1e.nwu\aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\LongPath.dll In value: String[] ["C:\\h\\w\\A8F309AF\\p\\dotnet.exe", "C:\\Windows\\SYSTEM32\\ntdll.dll", "C:\\Windows\\System32\\KERNEL32.DLL", "C:\\Windows\\System32\\KERNELBASE.dll", "C:\\Windows\\System32\\ucrtbase.dll", ...] Stack Trace: /_/src/libraries/System.Diagnostics.Process/tests/ProcessModuleTests.cs(127,0): at System.Diagnostics.Tests.ProcessModuleTests.LongModuleFileNamesAreSupported() System.Diagnostics.Tests.ProcessStartInfoTests.ShellExecute_Nano_Fails_Start [SKIP] Condition(s) not met: "IsWindowsNanoServer" Invalid number of parameters 0 File(s) copied Finished: System.Diagnostics.Process.Tests === TEST EXECUTION SUMMARY === ```
process
test failure in system diagnostics process tests c h w w e c h w p dotnet exe exec runtimeconfig system diagnostics process tests runtimeconfig json depsfile system diagnostics process tests deps json xunit console dll system diagnostics process tests dll xml testresults xml nologo nocolor notrait category ignoreforci notrait category outerloop notrait category failing discovering system diagnostics process tests method display classandmethod method display options none discovered system diagnostics process tests found of test cases starting system diagnostics process tests parallel test collections on max threads system diagnostics tests processmoduletests longmodulefilenamesaresupported assert contains failure not found c h w t processmoduletests nwu aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa longpath dll in value string stack trace src libraries system diagnostics process tests processmoduletests cs at system diagnostics tests processmoduletests longmodulefilenamesaresupported system diagnostics tests processstartinfotests shellexecute nano fails start condition s not met iswindowsnanoserver invalid number of parameters file s copied finished system diagnostics process tests test execution summary
1
2,502
5,272,883,537
IssuesEvent
2017-02-06 14:14:51
Hurence/logisland
https://api.github.com/repos/Hurence/logisland
opened
add Timezone management to SplitText
enhancement processor
add a property called timezone and use it in record_time detection
1.0
add Timezone management to SplitText - add a property called timezone and use it in record_time detection
process
add timezone management to splittext add a property called timezone and use it in record time detection
1
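A minimal sketch of how the requested timezone property could feed record_time detection; the offset-string convention and `parseRecordTime` are assumptions for illustration, not Logisland's actual API.
```typescript
// Hypothetical processor property: a fixed offset such as "+02:00".
function parseRecordTime(raw: string, timezone: string): Date {
  // If the timestamp already carries a zone, trust it; otherwise append
  // the configured offset before parsing.
  const hasZone = /(Z|[+-]\d{2}:\d{2})$/.test(raw);
  return new Date(hasZone ? raw : `${raw}${timezone}`);
}

console.log(parseRecordTime("2017-02-06T14:14:51", "+02:00").toISOString());
// 2017-02-06T12:14:51.000Z
```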
99
2,537,874,641
IssuesEvent
2015-01-26 23:44:14
tinkerpop/tinkerpop3
https://api.github.com/repos/tinkerpop/tinkerpop3
closed
[proposal] Provide cap(String...)
enhancement process
This way, people can do this: ```java g.V.group('a').by('age').out('nationality').groupCount('b').by('name').cap('a','b') ``` They will get a `Map<String,Object>` of their sideEffects. @dkuppitz @rjbriody
1.0
[proposal] Provide cap(String...) - This way, people can do this: ```java g.V.group('a').by('age').out('nationality').groupCount('b').by('name').cap('a','b') ``` They will get a `Map<String,Object>` of their sideEffects. @dkuppitz @rjbriody
process
provide cap string this way people can do this java g v group a by age out nationality groupcount b by name cap a b they will get a map of their sideeffects dkuppitz rjbriody
1
20,953
27,815,701,205
IssuesEvent
2023-03-18 17:01:33
cse442-at-ub/project_s23-atomic
https://api.github.com/repos/cse442-at-ub/project_s23-atomic
closed
Connect Sign-In Frontend to Backend
Processing Task Sprint 2
**Task Tests** *Test 1* 1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/ and verify you are taken to the landing page 2. Right click (or click the trackpad with two fingers) and inspect the page (modify its size so it doesn't completely mess the layout of the page :/) 3. View the console by clicking the console tab in the inspection menu 4. Click the sign up button and verify you are taken to a sign-up page that says to "Create Your Account" 5. Fill in the input boxes with the credentials: Username [icry], Email [cry@gmail.com], Password [12345678], Confirm Password [12345678]. Click the Continue button once you are done 6. Verify you are taken to a new page to choose your habits and the console prints a success message 7. Choose 3 or more habits to track and click the Next button 8. Verify you are sent to the home page and the console prints another success message
1.0
Connect Sign-In Frontend to Backend - **Task Tests** *Test 1* 1. Go to https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442q/ and verify you are taken to the landing page 2. Right click (or click the trackpad with two fingers) and inspect the page (modify its size so it doesn't completely mess the layout of the page :/) 3. View the console by clicking the console tab in the inspection menu 4. Click the sign up button and verify you are taken to a sign-up page that says to "Create Your Account" 5. Fill in the input boxes with the credentials: Username [icry], Email [cry@gmail.com], Password [12345678], Confirm Password [12345678]. Click the Continue button once you are done 6. Verify you are taken to a new page to choose your habits and the console prints a success message 7. Choose 3 or more habits to track and click the Next button 8. Verify you are sent to the home page and the console prints another success message
process
connect sign in frontend to backend task tests test go to and verify you are taken to the landing page right click or click the trackpad with two fingers and inspect the page modify its size so it doesn t completely mess the layout of the page view the console by clicking the console tab in the inspection menu click the sign up button and verify you are taken to a sign up page that says to create your account fill in the input boxes with the credentials username email password confirm password click the continue button once you are done verify you are taken to a new page to choose your habits and the console prints a success message choose or more habits to track and click the next button verify you are sent to the home page and the console prints another success message
1
16,448
21,327,213,802
IssuesEvent
2022-04-18 01:30:54
nodejs/node
https://api.github.com/repos/nodejs/node
closed
child_process.execSync should return object with std of plain stdout
child_process feature request stale
<!-- Thank you for suggesting an idea to make Node.js better. Please fill in as much of the template below as you're able. --> **Is your feature request related to a problem? Please describe.** I wanted to run a series of processes sequentially and check their exit code and stderr to decide whether the command failed or not. In my case, even with exit code 0, anything in stderr containing the `warning` substring should be considered a failure. When I needed to run just one command I happily used the following code: ```js childProcess.exec(command, (err, stdout, stderr) => { if (err || (stderr && stderr.toLowerCase().includes('warning'))) { console.error('Failed due to:'); console.error(stderr); process.exit(1); } console.log('OK\n'); process.exit(0); }); ``` When the task changed and I needed to run a few commands in a row, I decided to use the synchronous version of that method: ```js try { const stdout = childProcess.execSync(command, { encoding: 'utf8' }); // can't get stderr } catch (err) { const { status, stderr } = err; if (status > 0 || (stderr && stderr.toLowerCase().includes('warning'))) { console.error('Failed due to:'); console.error(stderr); process.exit(1); } } console.log('OK'); ``` However, using that code I can't check stderr when no exception is thrown. **Describe the solution you'd like** I'd like `childProcess.execSync` to return an object containing `status`/`code`, `stderr`, `stdout` and other fields, similar to the object returned by `child_process.spawnSync` (https://nodejs.org/api/child_process.html#child_process_child_process_spawnsync_command_args_options): ``` * pid <number> Pid of the child process. * output <Array> Array of results from stdio output. * stdout <Buffer> | <string> The contents of output[1]. * stderr <Buffer> | <string> The contents of output[2]. * status <number> | <null> The exit code of the subprocess, or null if the subprocess terminated due to a signal. * signal <string> | <null> The signal used to kill the subprocess, or null if the subprocess did not terminate due to a signal. * error <Error> The error object if the child process failed or timed out. ``` **Describe alternatives you've considered** Alternatives: 1) Use the asynchronous version (`childProcess.exec`) and rewrite the code to work in an asynchronous manner 2) Use `childProcess.spawnSync` with `shell=true`
1.0
child_process.execSync should return object with std of plain stdout - <!-- Thank you for suggesting an idea to make Node.js better. Please fill in as much of the template below as you're able. --> **Is your feature request related to a problem? Please describe.** I wanted to run a series of processes sequentially and check their exit code and stderr to decide whether the command failed or not. In my case, even with exit code 0, anything in stderr containing the `warning` substring should be considered a failure. When I needed to run just one command I happily used the following code: ```js childProcess.exec(command, (err, stdout, stderr) => { if (err || (stderr && stderr.toLowerCase().includes('warning'))) { console.error('Failed due to:'); console.error(stderr); process.exit(1); } console.log('OK\n'); process.exit(0); }); ``` When the task changed and I needed to run a few commands in a row, I decided to use the synchronous version of that method: ```js try { const stdout = childProcess.execSync(command, { encoding: 'utf8' }); // can't get stderr } catch (err) { const { status, stderr } = err; if (status > 0 || (stderr && stderr.toLowerCase().includes('warning'))) { console.error('Failed due to:'); console.error(stderr); process.exit(1); } } console.log('OK'); ``` However, using that code I can't check stderr when no exception is thrown. **Describe the solution you'd like** I'd like `childProcess.execSync` to return an object containing `status`/`code`, `stderr`, `stdout` and other fields, similar to the object returned by `child_process.spawnSync` (https://nodejs.org/api/child_process.html#child_process_child_process_spawnsync_command_args_options): ``` * pid <number> Pid of the child process. * output <Array> Array of results from stdio output. * stdout <Buffer> | <string> The contents of output[1]. * stderr <Buffer> | <string> The contents of output[2]. * status <number> | <null> The exit code of the subprocess, or null if the subprocess terminated due to a signal. * signal <string> | <null> The signal used to kill the subprocess, or null if the subprocess did not terminate due to a signal. * error <Error> The error object if the child process failed or timed out. ``` **Describe alternatives you've considered** Alternatives: 1) Use the asynchronous version (`childProcess.exec`) and rewrite the code to work in an asynchronous manner 2) Use `childProcess.spawnSync` with `shell=true`
process
child process execsync should return object with std of plain stdout thank you for suggesting an idea to make node js better please fill in as much of the template below as you re able is your feature request related to a problem please describe i wanted to run a series of processes sequentially and check their exit code and stderr to decide whether the command failed or not in my case even with exit code anything in stderr containing the warning substring should be considered a failure when i needed to run just one command i happily used the following code js childprocess exec command err stdout stderr if err stderr stderr tolowercase includes warning console error failed due to console error stderr process exit console log ok n process exit when the task changed and i needed to run a few commands in a row i decided to use the synchronous version of that method js try const stdout childprocess execsync command encoding can t get stderr catch err const status stderr err if status stderr stderr tolowercase includes warning console error failed due to console error stderr process exit console log ok however using that code i can t check stderr when no exception is thrown describe the solution you d like i d like childprocess execsync to return an object containing status code stderr stdout and other fields similar to the object returned by child process spawnsync pid pid of the child process output array of results from stdio output stdout the contents of output stderr the contents of output status the exit code of the subprocess or null if the subprocess terminated due to a signal signal the signal used to kill the subprocess or null if the subprocess did not terminate due to a signal error the error object if the child process failed or timed out describe alternatives you ve considered alternatives use the asynchronous version childprocess exec and rewrite the code to work in an asynchronous manner use childprocess spawnsync with shell true
1
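Alternative 2 from the record above already gives the requested shape today; a minimal sketch using Node's real `child_process.spawnSync`, with the reporter's own `warning` check carried over.
```typescript
import { spawnSync } from "child_process";

function runChecked(command: string): void {
  // spawnSync returns an object with status/stdout/stderr instead of
  // throwing on a non-zero exit, so stderr is inspectable in all cases.
  const { status, stdout, stderr } = spawnSync(command, {
    shell: true,
    encoding: "utf8",
  });
  if (status !== 0 || (stderr && stderr.toLowerCase().includes("warning"))) {
    console.error("Failed due to:");
    console.error(stderr);
    process.exit(1);
  }
  console.log(stdout);
  console.log("OK");
}

runChecked("echo hello");
```
Because `status` is `null` when the child dies from a signal, the `status !== 0` test also treats that case as a failure.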
73,132
31,880,342,527
IssuesEvent
2023-09-16 09:49:20
UpTalent/AuthService
https://api.github.com/repos/UpTalent/AuthService
closed
Write Tests for AuthService
Microservice High
### Description Write Tests for AuthService ### Tasks - [x] Register a new User successfully - [x] Register a new User with an already occupied email - [x] Register a new User while omitting some input data - [x] Verify User account successfully - [x] Verify User account with a non-found token - [x] Log in successfully - [x] Log in with max attempts reached - [x] Log in when already authorized - [x] Log in when account is blocked - [x] Log in when account is not verified - [x] Logout successfully
1.0
Write Tests for AuthService - ### Description Write Tests for AuthService ### Tasks - [x] Register a new User successfully - [x] Register a new User with an already occupied email - [x] Register a new User while omitting some input data - [x] Verify User account successfully - [x] Verify User account with a non-found token - [x] Log in successfully - [x] Log in with max attempts reached - [x] Log in when already authorized - [x] Log in when account is blocked - [x] Log in when account is not verified - [x] Logout successfully
non_process
write tests for authservice description write tests for authservice tasks register a new user successfully register a new user with an already occupied email register a new user while omitting some input data verify user account successfully verify user account with a non found token log in successfully log in with max attempts reached log in when already authorized log in when account is blocked log in when account is not verified logout successfully
0
201,193
15,802,020,203
IssuesEvent
2021-04-03 07:38:37
benedictkhoomw/ped
https://api.github.com/repos/benedictkhoomw/ped
opened
UG: Typo in markD example
severity.VeryLow type.DocumentationBug
## How to reproduce 1. Open UG 1. Navigate to `markD` section 1. View the example subsection ## Expected result `markD 1 i/1` marks the first deadline of the first project as done. ## Actual result `markD 1 i/1 marks the first deadline of the first project as done. ![image.png](https://raw.githubusercontent.com/benedictkhoomw/ped/main/files/45e08585-577f-4e7e-b77e-1af514f2ac65.png) <!--session: 1617429872510-d49d3224-781c-4182-8c46-5e1178736ef2-->
1.0
UG: Typo in markD example - ## How to reproduce 1. Open UG 1. Navigate to `markD` section 1. View the example subsection ## Expected result `markD 1 i/1` marks the first deadline of the first project as done. ## Actual result `markD 1 i/1 marks the first deadline of the first project as done. ![image.png](https://raw.githubusercontent.com/benedictkhoomw/ped/main/files/45e08585-577f-4e7e-b77e-1af514f2ac65.png) <!--session: 1617429872510-d49d3224-781c-4182-8c46-5e1178736ef2-->
non_process
ug typo in markd example how to reproduce open ug navigate to markd section view the example subsection expected result markd i marks the first deadline of the first project as done actual result markd i marks the first deadline of the first project as done
0
2,367
5,167,319,116
IssuesEvent
2017-01-17 18:28:31
rubberduck-vba/Rubberduck
https://api.github.com/repos/rubberduck-vba/Rubberduck
closed
Identifier names with special characters should display with their square-brackets
enhancement parse-tree-processing
An evil identifier like this: ```vb Enum fizz [Dim Option(Base 0 To Base 1) As Explicit Binary Case Do : Loop] = 4 End Enum Sub foo() Debug.Print [Dim Option(Base 0 To Base 1) As Explicit Binary Case Do : Loop] End Sub ``` Appears like this in the toolbar (yes, it truncates): ![text](https://cloud.githubusercontent.com/assets/16574009/20520164/df3d4da8-b0f9-11e6-933e-231fef7d4012.png) And like this in the Inspections window (yes, those inspections are ALL incorrect): ![inspect](https://cloud.githubusercontent.com/assets/16574009/20520198/ff6fb8c2-b0f9-11e6-82ba-7db6169a73f3.png)
1.0
Identifier names with special characters should display with their square-brackets - An evil identifier like this: ```vb Enum fizz [Dim Option(Base 0 To Base 1) As Explicit Binary Case Do : Loop] = 4 End Enum Sub foo() Debug.Print [Dim Option(Base 0 To Base 1) As Explicit Binary Case Do : Loop] End Sub ``` Appears like this in the toolbar (yes, it truncates): ![text](https://cloud.githubusercontent.com/assets/16574009/20520164/df3d4da8-b0f9-11e6-933e-231fef7d4012.png) And like this in the Inspections window (yes, those inspections are ALL incorrect): ![inspect](https://cloud.githubusercontent.com/assets/16574009/20520198/ff6fb8c2-b0f9-11e6-82ba-7db6169a73f3.png)
process
identifier names with special characters should display with their square brackets an evil identifier like this vb enum fizz end enum sub foo debug print end sub appears like this in the toolbar yes it truncates and like this in the inspections window yes those inspections are all incorrect
1
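The requested display behavior amounts to re-wrapping identifiers that need escaping; a minimal sketch with a deliberately simplified identifier test (real VBA bracketing rules are broader), not Rubberduck's actual implementation.
```typescript
// Simplified test: treat an identifier as needing [brackets] when it
// contains anything beyond letters, digits and underscores.
function displayName(identifier: string): string {
  return /^[A-Za-z_][A-Za-z0-9_]*$/.test(identifier)
    ? identifier
    : `[${identifier}]`;
}

console.log(displayName("fizz")); // fizz
console.log(displayName("Dim Option(Base 0 To Base 1) As Explicit Binary Case Do : Loop"));
// [Dim Option(Base 0 To Base 1) As Explicit Binary Case Do : Loop]
```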
25,020
7,612,024,244
IssuesEvent
2018-05-01 16:01:16
GoogleCloudPlatform/google-cloud-eclipse
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-eclipse
closed
Investigate suitable version scheme for nightly builds
build
Ideally we could assign a working version to each nightly build such that the version is final and suitable for immediate and direct release, without any concern of confusing users or the Eclipse update system when released into the wild. Otherwise, we still need to trigger a manual cut only because we need to assign a working version. Also, it is important that versions of successive nightly builds work properly when it comes to automatic updates.
1.0
Investigate suitable version scheme for nightly builds - Ideally we could assign a working version to each nightly build such that the version is final and suitable for immediate and direct release, without any concern of confusing users or the Eclipse update system when released into the wild. Otherwise, we still need to trigger a manual cut only because we need to assign a working version. Also, it is important that versions of successive nightly builds work properly when it comes to automatic updates.
non_process
investigate suitable version scheme for nightly builds ideally we could assign a working version to each nightly build such that the version is final and suitable for immediate and direct release without any concern of confusing users or the eclipse update system when released into the wild otherwise we still need to trigger a manual cut only because we need to assign a working version also it is important that versions of successive nightly builds work properly when it comes to automatic updates
0
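One common answer to the record above is to derive a final version from a build-timestamp qualifier, so every nightly is immediately releasable and later builds always sort higher for automatic updates; this is a minimal sketch only, not the scheme the project adopted.
```typescript
// e.g. "1.2.0.v20180501-1601": the qualifier encodes the build time, so
// plain lexicographic comparison orders nightlies correctly.
function nightlyVersion(base: string, buildTime: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const qualifier =
    `${buildTime.getUTCFullYear()}${pad(buildTime.getUTCMonth() + 1)}` +
    `${pad(buildTime.getUTCDate())}-${pad(buildTime.getUTCHours())}${pad(buildTime.getUTCMinutes())}`;
  return `${base}.v${qualifier}`;
}

console.log(nightlyVersion("1.2.0", new Date(Date.UTC(2018, 4, 1, 16, 1))));
// 1.2.0.v20180501-1601
```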
3,941
6,885,293,662
IssuesEvent
2017-11-21 15:42:28
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
New Term Request - child of GO:0016779 nucleotidyltransferase activity
auto-migrated BHF-UCL curator-request high priority New term request RNA processes
suggested name: RNA nucleotidyltransferase activity suggested definition: Catalysis of the reaction: ribonucleoside triphosphate + RNA(n) = diphosphate + RNA(n+1). example of protein to be annotated: UniProt Accession ID O14746 TERT example of publication as evidence: PMID 19701182 GOC:nc GOC:BHF Reported by: noonoo25 Original Ticket: [geneontology/ontology-requests/11402](https://sourceforge.net/p/geneontology/ontology-requests/11402)
1.0
New Term Request - child of GO:0016779 nucleotidyltransferase activity - suggested name: RNA nucleotidyltransferase activity suggested definition: Catalysis of the reaction: ribonucleoside triphosphate + RNA(n) = diphosphate + RNA(n+1). example of protein to be annotated: UniProt Accession ID O14746 TERT example of publication as evidence: PMID 19701182 GOC:nc GOC:BHF Reported by: noonoo25 Original Ticket: [geneontology/ontology-requests/11402](https://sourceforge.net/p/geneontology/ontology-requests/11402)
process
new term request child of go nucleotidyltransferase activity suggested name rna nucleotidyltransferase activity suggested definition catalysis of the reaction ribonucleoside triphosphate rna n diphosphate rna n example of protein to be annotated uniprot accession id tert example of publication as evidence pmid goc nc goc bhf reported by original ticket
1
18,929
24,883,064,029
IssuesEvent
2022-10-28 04:15:19
arcus-azure/arcus.messaging
https://api.github.com/repos/arcus-azure/arcus.messaging
closed
Reduce EventGrid publishing duplication in message handler test fixtures
enhancement good first issue area:message-processing testing
**Is your feature request related to a problem? Please describe.** We have quite some duplication in some of the message handlers used during integration testing. There is an extension on the EventGrid publisher, `PublishOrderAsync`, to reduce this duplication. **Describe the solution you'd like** Use the `PublishOrderAsync` extension to publish events in test fixture message handlers. This will also help us move away more easily from the `IEventGridPublisher` in a later phase.
1.0
Reduce EventGrid publishing duplication in message handler test fixtures - **Is your feature request related to a problem? Please describe.** We have quite some duplication in some of the message handlers used during integration testing. There is an extension on the EventGrid publisher, `PublishOrderAsync`, to reduce this duplication. **Describe the solution you'd like** Use the `PublishOrderAsync` extension to publish events in test fixture message handlers. This will also help us move away more easily from the `IEventGridPublisher` in a later phase.
process
reduce eventgrid publishing duplication in message handler test fixtures is your feature request related to a problem please describe we have quite some duplication in some of the message handlers used during integration testing there is an extension on the eventgrid publisher publishorderasync to reduce this duplication describe the solution you d like use the publishorderasync extension to publish events in test fixture message handlers this will also help us move away more easily from the ieventgridpublisher in a later phase
1
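The shape of the deduplication described above, sketched in TypeScript purely for illustration (the project itself is .NET); `EventPublisher` and `publishOrder` are hypothetical stand-ins for `IEventGridPublisher` and `PublishOrderAsync`.
```typescript
// Hypothetical publisher interface standing in for IEventGridPublisher.
interface EventPublisher {
  publish(topic: string, payload: unknown): Promise<void>;
}

// Shared helper playing the role of PublishOrderAsync: test-fixture
// message handlers call this instead of repeating publish boilerplate.
async function publishOrder(publisher: EventPublisher, orderId: string): Promise<void> {
  await publisher.publish("orders", { orderId, processedAt: Date.now() });
}

// A fixture handler shrinks to a single call.
async function orderMessageHandler(publisher: EventPublisher, orderId: string): Promise<void> {
  await publishOrder(publisher, orderId);
}

// Console-backed publisher, enough to run the sketch end to end.
const consolePublisher: EventPublisher = {
  publish: async (topic, payload) => console.log(topic, payload),
};

orderMessageHandler(consolePublisher, "order-42");
```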
17,569
23,383,669,071
IssuesEvent
2022-08-11 11:56:55
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
opened
[Mirror] googletest
P2 type: process team-OSS mirror request
### Please list the URLs of the archives you'd like to mirror: Just https://github.com/google/googletest/archive/refs/tags/release-1.12.1.tar.gz Thank you!
1.0
[Mirror] googletest - ### Please list the URLs of the archives you'd like to mirror: Just https://github.com/google/googletest/archive/refs/tags/release-1.12.1.tar.gz Thank you!
process
googletest please list the urls of the archives you d like to mirror just thank you
1
3,449
6,541,816,045
IssuesEvent
2017-09-01 22:03:32
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Custom expressions `cols` don't have type info
Bug Priority/P1 Query Processor
If I create a custom numeric field, I can't seem to select it for the y-axis or metric in any of our visualization types using the chart settings. Seems like MB doesn't know it's a metric? No JS errors that I can see. This is on both master and the new math branch on Chrome. Custom field: ![screen shot 2016-12-14 at 10 58 33 am](https://cloud.githubusercontent.com/assets/2223916/21196413/65059208-c1ec-11e6-9ae5-cc2924c31358.png) Shows up for this axis: ![screen shot 2016-12-14 at 10 58 06 am](https://cloud.githubusercontent.com/assets/2223916/21196429/6ddd3f52-c1ec-11e6-9680-24932ce8bcce.png) But not this one: ![screen shot 2016-12-14 at 10 58 13 am](https://cloud.githubusercontent.com/assets/2223916/21196440/78da9602-c1ec-11e6-99d3-0c9f1e9ca85e.png) Shows up here in pie chart: ![screen shot 2016-12-14 at 10 57 44 am](https://cloud.githubusercontent.com/assets/2223916/21196449/7f69f314-c1ec-11e6-9aba-412cdb6ab871.png) But not here: ![screen shot 2016-12-14 at 10 57 49 am](https://cloud.githubusercontent.com/assets/2223916/21196464/897d31fe-c1ec-11e6-9eed-f525ea4bf386.png) Scatter plot with two different custom numeric fields (the run differential ones): ![screen shot 2016-12-14 at 10 50 14 am](https://cloud.githubusercontent.com/assets/2223916/21196494/9f811d76-c1ec-11e6-96bf-a534f1848899.png) Can't use them for the y-axis: ![screen shot 2016-12-14 at 10 53 11 am](https://cloud.githubusercontent.com/assets/2223916/21196510/b25a03ea-c1ec-11e6-9d81-f3fc718ab189.png)
1.0
Custom expressions `cols` don't have type info - If I create a custom numeric field, I can't seem to select it for the y-axis or metric in any of our visualization types using the chart settings. Seems like MB doesn't know it's a metric? No JS errors that I can see. This is on both master and the new math branch on Chrome. Custom field: ![screen shot 2016-12-14 at 10 58 33 am](https://cloud.githubusercontent.com/assets/2223916/21196413/65059208-c1ec-11e6-9ae5-cc2924c31358.png) Shows up for this axis: ![screen shot 2016-12-14 at 10 58 06 am](https://cloud.githubusercontent.com/assets/2223916/21196429/6ddd3f52-c1ec-11e6-9680-24932ce8bcce.png) But not this one: ![screen shot 2016-12-14 at 10 58 13 am](https://cloud.githubusercontent.com/assets/2223916/21196440/78da9602-c1ec-11e6-99d3-0c9f1e9ca85e.png) Shows up here in pie chart: ![screen shot 2016-12-14 at 10 57 44 am](https://cloud.githubusercontent.com/assets/2223916/21196449/7f69f314-c1ec-11e6-9aba-412cdb6ab871.png) But not here: ![screen shot 2016-12-14 at 10 57 49 am](https://cloud.githubusercontent.com/assets/2223916/21196464/897d31fe-c1ec-11e6-9eed-f525ea4bf386.png) Scatter plot with two different custom numeric fields (the run differential ones): ![screen shot 2016-12-14 at 10 50 14 am](https://cloud.githubusercontent.com/assets/2223916/21196494/9f811d76-c1ec-11e6-96bf-a534f1848899.png) Can't use them for the y-axis: ![screen shot 2016-12-14 at 10 53 11 am](https://cloud.githubusercontent.com/assets/2223916/21196510/b25a03ea-c1ec-11e6-9d81-f3fc718ab189.png)
process
custom expressions cols don t have type info if i create a custom numeric field i can t seem to select it for the y axis or metric in any of our visualization types using the chart settings seems like mb doesn t know it s a metric no js errors that i can see this is on both master and the new math branch on chrome custom field shows up for this axis but not this one shows up here in pie chart but not here scatter plot with two different custom numeric fields the run differential ones can t use them for the y axis
1
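The report boils down to missing type metadata on custom-expression columns; a minimal sketch of the kind of check a chart-settings widget needs, with hypothetical field names rather than Metabase's actual column schema.
```typescript
// Hypothetical column descriptor; Metabase's real schema differs.
interface ColumnInfo {
  name: string;
  base_type?: string; // e.g. "type/Float", absent on the custom expressions
}

// A chart widget can only offer a column as a metric or y-axis when it
// can tell the column is numeric.
function isMetricCandidate(col: ColumnInfo): boolean {
  return col.base_type === "type/Integer" || col.base_type === "type/Float";
}

console.log(isMetricCandidate({ name: "run_diff" })); // false: no type info
console.log(isMetricCandidate({ name: "run_diff", base_type: "type/Float" })); // true
```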
11,509
14,394,126,671
IssuesEvent
2020-12-03 00:33:13
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Real-time report shows way too many hits
bug log-processing
When GoAccess is first started, the real-time report behaves correctly. However, after a day or two (I assume the issue is triggered by logrotate), each hit starts counting as multiple hits. For example, right now every time I `curl` my home page, the "hits" count on the real-time report increases by 26. ![A GIF demonstrating the issue](https://user-images.githubusercontent.com/4148577/100684495-721b8480-3348-11eb-83af-a6df82b07f50.gif) GoAccess version: 1.4.2 <details> <summary>SystemD unit file</summary> ``` [Unit] Description=GoAccess stephenwade.me Requires=network-online.target apache2.service After=network-online.target apache2.service [Service] Type=simple Restart=always ExecStart=/usr/local/bin/goaccess-stephenwade.me.sh ExecStop=/bin/kill -INT $MAINPID [Install] WantedBy=multi-user.target ``` </details> <details> <summary>goaccess-stephenwade.me.sh</summary> ```bash #!/bin/bash /usr/bin/goaccess \ /var/log/apache2/stephenwade.me_access.log \ -o /var/www/stephenwade.me/goaccess/index.html \ --log-format=COMBINED --real-time-html --port=8625 \ --ssl-cert=/etc/letsencrypt/live/stephenwade.me/fullchain.pem \ --ssl-key=/etc/letsencrypt/live/stephenwade.me/privkey.pem \ --html-report-title='stephenwade.me Server Statistics' --html-prefs='{"theme": "darkBlue"}' ``` </details>
1.0
Real-time report shows way too many hits - When GoAccess is first started, the real-time report behaves correctly. However, after a day or two (I assume the issue is triggered by logrotate), each hit starts counting as multiple hits. For example, right now every time I `curl` my home page, the "hits" count on the real-time report increases by 26. ![A GIF demonstrating the issue](https://user-images.githubusercontent.com/4148577/100684495-721b8480-3348-11eb-83af-a6df82b07f50.gif) GoAccess version: 1.4.2 <details> <summary>SystemD unit file</summary> ``` [Unit] Description=GoAccess stephenwade.me Requires=network-online.target apache2.service After=network-online.target apache2.service [Service] Type=simple Restart=always ExecStart=/usr/local/bin/goaccess-stephenwade.me.sh ExecStop=/bin/kill -INT $MAINPID [Install] WantedBy=multi-user.target ``` </details> <details> <summary>goaccess-stephenwade.me.sh</summary> ```bash #!/bin/bash /usr/bin/goaccess \ /var/log/apache2/stephenwade.me_access.log \ -o /var/www/stephenwade.me/goaccess/index.html \ --log-format=COMBINED --real-time-html --port=8625 \ --ssl-cert=/etc/letsencrypt/live/stephenwade.me/fullchain.pem \ --ssl-key=/etc/letsencrypt/live/stephenwade.me/privkey.pem \ --html-report-title='stephenwade.me Server Statistics' --html-prefs='{"theme": "darkBlue"}' ``` </details>
process
real time report shows way too many hits when goaccess is first started the real time report behaves correctly however after a day or two i assume the issue is triggered by logrotate each hit starts counting as multiple hits for example right now every time i curl my home page the hits count on the real time report increases by goaccess version systemd unit file description goaccess stephenwade me requires network online target service after network online target service type simple restart always execstart usr local bin goaccess stephenwade me sh execstop bin kill int mainpid wantedby multi user target goaccess stephenwade me sh bash bin bash usr bin goaccess var log stephenwade me access log o var www stephenwade me goaccess index html log format combined real time html port ssl cert etc letsencrypt live stephenwade me fullchain pem ssl key etc letsencrypt live stephenwade me privkey pem html report title stephenwade me server statistics html prefs theme darkblue
1
265,852
28,298,774,595
IssuesEvent
2023-04-10 02:39:41
nidhi7598/linux-4.19.72
https://api.github.com/repos/nidhi7598/linux-4.19.72
closed
CVE-2019-19768 (High) detected in linuxlinux-4.19.254 - autoclosed
Mend: dependency security vulnerability
## CVE-2019-19768 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.254</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72/commit/10a8c99e4f60044163c159867bc6f5452c1c36e5">10a8c99e4f60044163c159867bc6f5452c1c36e5</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/trace/blktrace.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/trace/blktrace.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In the Linux kernel 5.4.0-rc2, there is a use-after-free (read) in the __blk_add_trace function in kernel/trace/blktrace.c (which is used to fill out a blk_io_trace structure and place it in a per-cpu sub-buffer). <p>Publish Date: 2019-12-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19768>CVE-2019-19768</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-19768">https://nvd.nist.gov/vuln/detail/CVE-2019-19768</a></p> <p>Release Date: 2020-06-10</p> <p>Fix Resolution: kernel-doc - 3.10.0-514.76.1,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-327.88.1,4.18.0-80.18.1,4.18.0-193,3.10.0-1062.26.1,3.10.0-693.67.1;kernel-rt-core - 4.18.0-193.rt13.51;kernel-rt-debug-debuginfo - 4.18.0-193.rt13.51;kernel-abi-whitelists - 3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-193,3.10.0-693.67.1;kernel-zfcpdump-modules - 4.18.0-193,4.18.0-147.13.2;kernel-rt-trace-devel - 3.10.0-1127.8.2.rt56.1103;kernel-debug-modules-extra - 4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-147.13.2;kernel-rt-debug-kvm - 4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103;kernel-bootwrapper - 3.10.0-1062.26.1,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-957.54.1;kernel-rt-debuginfo - 4.18.0-193.rt13.51;kernel-rt-debug-modules - 4.18.0-193.rt13.51;kernel-zfcpdump-devel - 4.18.0-193,4.18.0-147.13.2;perf - 3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-193,4.18.0-193,3.10.0-327.88.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-1127.8.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-957.54.1;kernel-zfcpdump-modules-extra - 4.18.0-193,4.18.0-147.13.2;kernel-debuginfo - 3.10.0-514.76.1,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1062.26.1;kernel-debug-devel - 3.10.0-514.76.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-193,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-327.88.1,4.18.0-193,4.18.0-80.18.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,4.18.0-80.18.1;bpftool - 3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-1127.8.2,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-193,3.10.0-1127.8.2;kernel-rt-debug-core - 4.18.0-193.rt13.51;kernel-tools-libs - 3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-327.88.1,3.10.0-1127.8.2,4.18.0-193,3.10.0-693.67.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2;perf-debuginfo - 3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1062.26.1,3.10.0-1062.26.1,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-327.88.1;kernel-cross-headers - 4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-193,4.18.0-147.13.2;kernel-debug-debuginfo - 
3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-693.67.1,4.18.0-193,3.10.0-514.76.1,3.10.0-327.88.1,3.10.0-957.54.1,3.10.0-1062.26.1,3.10.0-957.54.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2;kernel-debug - 3.10.0-514.76.1,3.10.0-327.88.1,4.18.0-193,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-957.54.1,4.18.0-193,4.18.0-193,3.10.0-1062.26.1,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2;kernel-devel - 4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-514.76.1,4.18.0-193,4.18.0-80.18.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,4.18.0-80.18.1,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-693.67.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2;kernel - 3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-327.88.1,3.10.0-327.88.1,4.18.0-147.13.2,4.18.0-147.13.2,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-193,4.18.0-193,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,4.18.0-193,3.10.0-514.76.1,3.10.0-693.67.1,4.18.0-193,3.10.0-1127.8.2;bpftool-debuginfo - 4.18.0-193,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-80.18.1;kpatch-patch-3_10_0-1062_12_1 - 1-2,1-2;kernel-zfcpdump-core - 4.18.0-147.13.2,4.18.0-193;kernel-debug-core - 4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193;kernel-modules-extra - 4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2;kernel-rt-debug-devel - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;python-perf - 3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-327.88.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1;kernel-core - 4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2;kernel-rt-debug - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-rt-devel - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-debuginfo-common-ppc64 - 3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-1062.26.1;python3-perf - 4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2;kernel-tools - 3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2;kernel-debug-modules - 4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2;kernel-rt-trace-kvm - 3.10.0-1127.8.2.rt56.1103;kernel-rt-debuginfo-common-x86_64 - 4.18.0-193.rt13.51;kernel-tools-libs-devel - 
3.10.0-514.76.1,3.10.0-327.88.1,3.10.0-693.67.1,3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-1127.8.2,3.10.0-957.54.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-1062.26.1,3.10.0-957.54.1;kernel-modules - 4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193;kernel-tools-debuginfo - 3.10.0-1062.26.1,4.18.0-193,3.10.0-1127.8.2,4.18.0-80.18.1,3.10.0-327.88.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-693.67.1;kernel-rt-modules - 4.18.0-193.rt13.51;kernel-rt-doc - 3.10.0-1127.8.2.rt56.1103;kernel-rt-kvm - 4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103;python-perf-debuginfo - 3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-327.88.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-1062.26.1;kernel-headers - 3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1127.8.2,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,3.10.0-693.67.1,4.18.0-193,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,4.18.0-193,3.10.0-1127.8.2;kernel-rt-trace - 3.10.0-1127.8.2.rt56.1103;kernel-debuginfo-common-x86_64 - 3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-327.88.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-514.76.1,4.18.0-193,3.10.0-957.54.1;kernel-rt - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-zfcpdump - 4.18.0-147.13.2,4.18.0-193;kernel-rt-debug-modules-extra - 4.18.0-193.rt13.51;python3-perf-debuginfo - 4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193;kernel-rt-modules-extra - 4.18.0-193.rt13.51</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-19768 (High) detected in linuxlinux-4.19.254 - autoclosed - ## CVE-2019-19768 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.254</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-4.19.72/commit/10a8c99e4f60044163c159867bc6f5452c1c36e5">10a8c99e4f60044163c159867bc6f5452c1c36e5</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/trace/blktrace.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/kernel/trace/blktrace.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In the Linux kernel 5.4.0-rc2, there is a use-after-free (read) in the __blk_add_trace function in kernel/trace/blktrace.c (which is used to fill out a blk_io_trace structure and place it in a per-cpu sub-buffer). <p>Publish Date: 2019-12-12 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19768>CVE-2019-19768</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-19768">https://nvd.nist.gov/vuln/detail/CVE-2019-19768</a></p> <p>Release Date: 2020-06-10</p> <p>Fix Resolution: kernel-doc - 3.10.0-514.76.1,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-327.88.1,4.18.0-80.18.1,4.18.0-193,3.10.0-1062.26.1,3.10.0-693.67.1;kernel-rt-core - 4.18.0-193.rt13.51;kernel-rt-debug-debuginfo - 4.18.0-193.rt13.51;kernel-abi-whitelists - 3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-193,3.10.0-693.67.1;kernel-zfcpdump-modules - 4.18.0-193,4.18.0-147.13.2;kernel-rt-trace-devel - 3.10.0-1127.8.2.rt56.1103;kernel-debug-modules-extra - 4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-147.13.2;kernel-rt-debug-kvm - 4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103;kernel-bootwrapper - 3.10.0-1062.26.1,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-957.54.1;kernel-rt-debuginfo - 4.18.0-193.rt13.51;kernel-rt-debug-modules - 4.18.0-193.rt13.51;kernel-zfcpdump-devel - 4.18.0-193,4.18.0-147.13.2;perf - 3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-193,4.18.0-193,3.10.0-327.88.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-1127.8.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-957.54.1;kernel-zfcpdump-modules-extra - 4.18.0-193,4.18.0-147.13.2;kernel-debuginfo - 3.10.0-514.76.1,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1062.26.1;kernel-debug-devel - 3.10.0-514.76.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-193,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-327.88.1,4.18.0-193,4.18.0-80.18.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,4.18.0-80.18.1;bpftool - 3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-1062.26.1,3.10.0-1127.8.2,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-193,3.10.0-1127.8.2;kernel-rt-debug-core - 4.18.0-193.rt13.51;kernel-tools-libs - 3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-327.88.1,3.10.0-1127.8.2,4.18.0-193,3.10.0-693.67.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2;perf-debuginfo - 3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1062.26.1,3.10.0-1062.26.1,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-327.88.1;kernel-cross-headers - 4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-193,4.18.0-147.13.2;kernel-debug-debuginfo - 
3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-693.67.1,4.18.0-193,3.10.0-514.76.1,3.10.0-327.88.1,3.10.0-957.54.1,3.10.0-1062.26.1,3.10.0-957.54.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2;kernel-debug - 3.10.0-514.76.1,3.10.0-327.88.1,4.18.0-193,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-957.54.1,4.18.0-193,4.18.0-193,3.10.0-1062.26.1,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2;kernel-devel - 4.18.0-193,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-957.54.1,4.18.0-147.13.2,3.10.0-514.76.1,4.18.0-193,4.18.0-80.18.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,4.18.0-80.18.1,3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-693.67.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2;kernel - 3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-327.88.1,3.10.0-327.88.1,4.18.0-147.13.2,4.18.0-147.13.2,3.10.0-957.54.1,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-193,4.18.0-193,3.10.0-1127.8.2,4.18.0-147.13.2,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-1127.8.2,4.18.0-193,3.10.0-514.76.1,3.10.0-693.67.1,4.18.0-193,3.10.0-1127.8.2;bpftool-debuginfo - 4.18.0-193,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-1062.26.1,4.18.0-80.18.1;kpatch-patch-3_10_0-1062_12_1 - 1-2,1-2;kernel-zfcpdump-core - 4.18.0-147.13.2,4.18.0-193;kernel-debug-core - 4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193;kernel-modules-extra - 4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2;kernel-rt-debug-devel - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;python-perf - 3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-327.88.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-514.76.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1;kernel-core - 4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2;kernel-rt-debug - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-rt-devel - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-debuginfo-common-ppc64 - 3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-1062.26.1;python3-perf - 4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2;kernel-tools - 3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-193,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1127.8.2,3.10.0-514.76.1,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-327.88.1,3.10.0-1062.26.1,4.18.0-193,3.10.0-957.54.1,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-1127.8.2;kernel-debug-modules - 4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2;kernel-rt-trace-kvm - 3.10.0-1127.8.2.rt56.1103;kernel-rt-debuginfo-common-x86_64 - 4.18.0-193.rt13.51;kernel-tools-libs-devel - 
3.10.0-514.76.1,3.10.0-327.88.1,3.10.0-693.67.1,3.10.0-1062.26.1,3.10.0-1062.26.1,3.10.0-1127.8.2,3.10.0-957.54.1,3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-1127.8.2,3.10.0-514.76.1,3.10.0-957.54.1,3.10.0-1062.26.1,3.10.0-957.54.1;kernel-modules - 4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-193,4.18.0-80.18.1,4.18.0-193,4.18.0-147.13.2,4.18.0-147.13.2,4.18.0-193;kernel-tools-debuginfo - 3.10.0-1062.26.1,4.18.0-193,3.10.0-1127.8.2,4.18.0-80.18.1,3.10.0-327.88.1,4.18.0-147.13.2,3.10.0-1127.8.2,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-693.67.1;kernel-rt-modules - 4.18.0-193.rt13.51;kernel-rt-doc - 3.10.0-1127.8.2.rt56.1103;kernel-rt-kvm - 4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103;python-perf-debuginfo - 3.10.0-693.67.1,3.10.0-1127.8.2,3.10.0-957.54.1,3.10.0-1127.8.2,3.10.0-327.88.1,3.10.0-1062.26.1,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-1062.26.1;kernel-headers - 3.10.0-1062.26.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-514.76.1,4.18.0-193,4.18.0-80.18.1,4.18.0-147.13.2,3.10.0-327.88.1,3.10.0-1127.8.2,4.18.0-147.13.2,4.18.0-193,3.10.0-1062.26.1,3.10.0-693.67.1,4.18.0-193,3.10.0-1127.8.2,3.10.0-693.67.1,4.18.0-147.13.2,3.10.0-957.54.1,3.10.0-514.76.1,3.10.0-1062.26.1,4.18.0-80.18.1,3.10.0-957.54.1,4.18.0-193,3.10.0-1127.8.2;kernel-rt-trace - 3.10.0-1127.8.2.rt56.1103;kernel-debuginfo-common-x86_64 - 3.10.0-1127.8.2,3.10.0-693.67.1,3.10.0-327.88.1,4.18.0-147.13.2,4.18.0-80.18.1,3.10.0-1062.26.1,3.10.0-514.76.1,4.18.0-193,3.10.0-957.54.1;kernel-rt - 3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51,3.10.0-1127.8.2.rt56.1103,4.18.0-193.rt13.51;kernel-zfcpdump - 4.18.0-147.13.2,4.18.0-193;kernel-rt-debug-modules-extra - 4.18.0-193.rt13.51;python3-perf-debuginfo - 4.18.0-147.13.2,4.18.0-80.18.1,4.18.0-193;kernel-rt-modules-extra - 4.18.0-193.rt13.51</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files kernel trace blktrace c kernel trace blktrace c vulnerability details in the linux kernel there is a use after free read in the blk add trace function in kernel trace blktrace c which is used to fill out a blk io trace structure and place it in a per cpu sub buffer publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution kernel doc kernel rt core kernel rt debug debuginfo kernel abi whitelists kernel zfcpdump modules kernel rt trace devel kernel debug modules extra kernel rt debug kvm kernel bootwrapper kernel rt debuginfo kernel rt debug modules kernel zfcpdump devel perf kernel zfcpdump modules extra kernel debuginfo kernel debug devel bpftool kernel rt debug core kernel tools libs perf debuginfo kernel cross headers kernel debug debuginfo kernel debug kernel devel kernel bpftool debuginfo kpatch patch kernel zfcpdump core kernel debug core kernel modules extra kernel rt debug devel python perf kernel core kernel rt debug kernel rt devel kernel debuginfo common perf kernel tools kernel debug modules kernel rt trace kvm kernel rt debuginfo common kernel tools libs devel kernel modules kernel tools debuginfo kernel rt modules kernel rt doc kernel rt kvm python perf debuginfo kernel headers kernel rt trace kernel debuginfo common kernel rt kernel zfcpdump kernel rt debug modules extra perf debuginfo kernel rt modules extra step up your open source security game with mend
0
22,677
31,925,409,885
IssuesEvent
2023-09-19 01:06:02
googleapis/python-bigquery-dataframes
https://api.github.com/repos/googleapis/python-bigquery-dataframes
opened
Warning: a recent release failed
type: process
The following release PRs may have failed: * #18 - The release job failed -- check the build log. * #21 - The release job failed -- check the build log. * #18 - The release job was triggered, but has not reported back success. * #21 - The release job was triggered, but has not reported back success.
1.0
Warning: a recent release failed - The following release PRs may have failed: * #18 - The release job failed -- check the build log. * #21 - The release job failed -- check the build log. * #18 - The release job was triggered, but has not reported back success. * #21 - The release job was triggered, but has not reported back success.
process
warning a recent release failed the following release prs may have failed the release job failed check the build log the release job failed check the build log the release job was triggered but has not reported back success the release job was triggered but has not reported back success
1
735,704
25,411,303,572
IssuesEvent
2022-11-22 19:13:54
mozilla/addons-linter
https://api.github.com/repos/mozilla/addons-linter
closed
Gate background.service_worker support behind an explicit addons-linter feature flag
priority: p1
The background.service_worker field is currently disallowed in extensions submitted to AMO due to the `min_manifest_version: 3` part of its JSONSchema definition. Once we enable `manifest_version: 3` support by default on AMO, `background.service_worker` would become allowed in MV3 extensions submitted to AMO, but Firefox support for `background.service_worker` is only enabled when its separate about:config pref is set to true, so AMO should not yet accept extensions using `background.service_worker`. This issue tracks the addition of a new, separate feature flag to the addons-linter that keeps `background.service_worker` support disabled by default.
1.0
Gate background.service_worker support behind an explicit addons-linter feature flag - The background.service_worker field is currently disallowed in extensions submitted to AMO due to the `min_manifest_version: 3` part of its JSONSchema definition. Once we enable `manifest_version: 3` support by default on AMO, `background.service_worker` would become allowed in MV3 extensions submitted to AMO, but Firefox support for `background.service_worker` is only enabled when its separate about:config pref is set to true, so AMO should not yet accept extensions using `background.service_worker`. This issue tracks the addition of a new, separate feature flag to the addons-linter that keeps `background.service_worker` support disabled by default.
non_process
gate background service worker support behind an explicit addons linter feature flag the background service worker field is currently disallowed in extensions submitted to amo due to the min manifest version part of its jsonschema definition once we enable manifest version support by default on amo background service worker would become allowed in extensions submitted to amo but firefox support for background service worker is only enabled when its separate about config pref is set to true so amo should not yet accept extensions using background service worker this issue tracks the addition of a new separate feature flag to the addons linter that keeps background service worker support disabled by default
0
15,315
19,423,501,219
IssuesEvent
2021-12-21 00:22:38
googleapis/google-cloud-ruby
https://api.github.com/repos/googleapis/google-cloud-ruby
closed
[Contributor docs] Document development and testing processes for google-cloud-pubsub
api: pubsub type: process
Enumerate topics and write an initial document for contributors to google-cloud-pubsub. At a high level, this should include at least: * How to run local unit tests * How to set up and run integration/acceptance/samples tests, including remote project setup, fixtures, and connections to other services * How to write tests for new features * Other things we check during CI (e.g. rubocop, yard tests, etc.) * What is expected when opening a pull request (e.g. conventional commits, CLA) * Anything else you can think of
1.0
[Contributor docs] Document development and testing processes for google-cloud-pubsub - Enumerate topics and write an initial document for contributors to google-cloud-pubsub. At a high level, this should include at least: * How to run local unit tests * How to set up and run integration/acceptance/samples tests, including remote project setup, fixtures, and connections to other services * How to write tests for new features * Other things we check during CI (e.g. rubocop, yard tests, etc.) * What is expected when opening a pull request (e.g. conventional commits, CLA) * Anything else you can think of
process
document development and testing processes for google cloud pubsub enumerate topics and write an initial document for contributors to google cloud pubsub at a high level this should include at least how to run local unit tests how to set up and run integration acceptance samples tests including remote project setup fixtures and connections to other services how to write tests for new features other things we check during ci e g rubocop yard tests etc what is expected when opening a pull request e g conventional commits cla anything else you can think of
1
5,069
7,869,528,944
IssuesEvent
2018-06-24 15:00:43
StrikeNP/trac_test
https://api.github.com/repos/StrikeNP/trac_test
closed
Add useful plots and get rid of useless plots on plotgen (Trac #824)
Migrated from Trac bmg2@uwm.edu post_processing task
Plotgen could be improved by adding more useful panels to it. However, that also increases the amount of space that each plotgen ```.maff``` files takes up, and we don't want to go over the attachment limit for Trac. In order to make room for new plots, useless panels should be taken off plotgen. An example of a useless panel would be graupel mixing ratio for the FIRE stratocumulus case. Attachments: [plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff) [plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff) [plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff) [plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff) [plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff) [plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff) [plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff) [plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff) [plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff) Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/824 ```json { "status": "closed", "changetime": "2018-06-05T18:22:00", "description": "Plotgen could be improved by adding more useful panels to it. However, that also increases the amount of space that each plotgen {{{.maff}}} files takes up, and we don't want to go over the attachment limit for Trac. In order to make room for new plots, useless panels should be taken off plotgen. An example of a useless panel would be graupel mixing ratio for the FIRE stratocumulus case.", "reporter": "bmg2@uwm.edu", "cc": "vlarson@uwm.edu", "resolution": "fixed", "_ts": "1528222920626922", "component": "post_processing", "summary": "Add useful plots and get rid of useless plots on plotgen", "priority": "minor", "keywords": "new plots", "time": "2018-04-19T21:46:14", "milestone": "Improve Plotgen", "owner": "bmg2@uwm.edu", "type": "task" } ```
1.0
Add useful plots and get rid of useless plots on plotgen (Trac #824) - Plotgen could be improved by adding more useful panels to it. However, that also increases the amount of space that each plotgen ```.maff``` files takes up, and we don't want to go over the attachment limit for Trac. In order to make room for new plots, useless panels should be taken off plotgen. An example of a useless panel would be graupel mixing ratio for the FIRE stratocumulus case. Attachments: [plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff) [plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff) [plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff) [plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff) [plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff) [plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff) [plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff) [plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff) [plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff) Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/824 ```json { "status": "closed", "changetime": "2018-06-05T18:22:00", "description": "Plotgen could be improved by adding more useful panels to it. However, that also increases the amount of space that each plotgen {{{.maff}}} files takes up, and we don't want to go over the attachment limit for Trac. In order to make room for new plots, useless panels should be taken off plotgen. An example of a useless panel would be graupel mixing ratio for the FIRE stratocumulus case.", "reporter": "bmg2@uwm.edu", "cc": "vlarson@uwm.edu", "resolution": "fixed", "_ts": "1528222920626922", "component": "post_processing", "summary": "Add useful plots and get rid of useless plots on plotgen", "priority": "minor", "keywords": "new plots", "time": "2018-04-19T21:46:14", "milestone": "Improve Plotgen", "owner": "bmg2@uwm.edu", "type": "task" } ```
process
add useful plots and get rid of useless plots on plotgen trac plotgen could be improved by adding more useful panels to it however that also increases the amount of space that each plotgen maff files takes up and we don t want to go over the attachment limit for trac in order to make room for new plots useless panels should be taken off plotgen an example of a useless panel would be graupel mixing ratio for the fire stratocumulus case attachments migrated from json status closed changetime description plotgen could be improved by adding more useful panels to it however that also increases the amount of space that each plotgen maff files takes up and we don t want to go over the attachment limit for trac in order to make room for new plots useless panels should be taken off plotgen an example of a useless panel would be graupel mixing ratio for the fire stratocumulus case reporter uwm edu cc vlarson uwm edu resolution fixed ts component post processing summary add useful plots and get rid of useless plots on plotgen priority minor keywords new plots time milestone improve plotgen owner uwm edu type task
1
559,580
16,565,731,211
IssuesEvent
2021-05-29 11:14:21
sopra-fs21-group-11/sopra-server
https://api.github.com/repos/sopra-fs21-group-11/sopra-server
closed
S10: As a user I want to be able to chat with players who are in the same game as me
medium priority user story
- [ ] as soon as a game participant writes a message, their name and message should be visible to the others playing the same game as me
1.0
S10: As a user I want to be able to chat with players who are in the same game as me - - [ ] as soon as a game participant writes a message, their name and message should be visible to the others playing the same game as me
non_process
as a user i want to be able to chat with players who are in the same game as me as soon as a game participant writes a message their name and message should be visible to the others playing the same game as me
0
19,817
26,206,251,074
IssuesEvent
2023-01-03 23:03:12
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Sorting by Custom Columns on MongoDB does not work
Type:Bug Priority:P2 Database/Mongo Querying/Processor .Backend Querying/Notebook/Custom Column
**Describe the bug** ![image](https://user-images.githubusercontent.com/965260/181052314-d0fa751c-2c65-477e-affb-01bc22347c5f.png) ![image](https://user-images.githubusercontent.com/965260/181052344-1d0cbf83-05c8-43dc-bd25-f3ad8e02686f.png) Database used: MongoDB. Metabase v0.43.4. **Severity** Somewhat problematic - We were planning to use this combined with alerts to get us a sense of the top values for this query. If we can't resolve this, we'll have to figure out some other way.
1.0
Sorting by Custom Columns on MongoDB does not work - **Describe the bug** ![image](https://user-images.githubusercontent.com/965260/181052314-d0fa751c-2c65-477e-affb-01bc22347c5f.png) ![image](https://user-images.githubusercontent.com/965260/181052344-1d0cbf83-05c8-43dc-bd25-f3ad8e02686f.png) Database used: MongoDB. Metabase v0.43.4. **Severity** Somewhat problematic - We were planning to use this combined with alerts to get us a sense of the top values for this query. If we can't resolve this, we'll have to figure out some other way.
process
sorting by custom columns on mongodb does not work describe the bug database used mongodb metabase severity somewhat problematic we were planning to use this combined with alerts to get us a sense of the top values for this query if we can t resolve this we ll have to figure out some other way
1
727,348
25,032,205,946
IssuesEvent
2022-11-04 13:20:35
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.nytimes.com - desktop site instead of mobile site
browser-firefox priority-critical os-mac engine-gecko
<!-- @browser: Firefox 106.0 --> <!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:106.0) Gecko/20100101 Firefox/106.0 --> <!-- @reported_with: unknown --> **URL**: https://www.nytimes.com/2022/11/02/us/detroit-halloween-arson.html **Browser / Version**: Firefox 106.0 **Operating System**: Mac OS X 10.15 **Tested Another Browser**: Yes Safari **Problem type**: Desktop site instead of mobile site **Description**: Desktop site instead of mobile site **Steps to Reproduce**: In reader view, the leading photo for many (maybe most or all, I have seen it twice so far?) articles show up duplicated. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/11/8179401c-070d-45b5-8601-aa59097d7159.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.nytimes.com - desktop site instead of mobile site - <!-- @browser: Firefox 106.0 --> <!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:106.0) Gecko/20100101 Firefox/106.0 --> <!-- @reported_with: unknown --> **URL**: https://www.nytimes.com/2022/11/02/us/detroit-halloween-arson.html **Browser / Version**: Firefox 106.0 **Operating System**: Mac OS X 10.15 **Tested Another Browser**: Yes Safari **Problem type**: Desktop site instead of mobile site **Description**: Desktop site instead of mobile site **Steps to Reproduce**: In reader view, the leading photo for many (maybe most or all, I have seen it twice so far?) articles show up duplicated. <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2022/11/8179401c-070d-45b5-8601-aa59097d7159.jpeg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
desktop site instead of mobile site url browser version firefox operating system mac os x tested another browser yes safari problem type desktop site instead of mobile site description desktop site instead of mobile site steps to reproduce in reader view the leading photo for many maybe most or all i have seen it twice so far articles show up duplicated view the screenshot img alt screenshot src browser configuration none from with ❤️
0
200,921
15,164,937,426
IssuesEvent
2021-02-12 14:25:27
EscherLabs/Graphene
https://api.github.com/repos/EscherLabs/Graphene
closed
Graphene Workflows: Switching Tabs while Editing Workflow Form Breaks Things
awaiting test bug high priority workflows
@alikemaltanriverdi just added the screen recording for a way to duplicate this bug. You can find the link below: https://drive.google.com/drive/folders/1t2xOYn2AUOUfVqWgYGH5WIeB4lqnW6D6?usp=sharing Workflow Link: https://my.binghamton.edu/admin/workflows/8#forms
1.0
Graphene Workflows: Switching Tabs while Editing Workflow Form Breaks Things - @alikemaltanriverdi just added the screen recording for a way to duplicate this bug. You can find the link below: https://drive.google.com/drive/folders/1t2xOYn2AUOUfVqWgYGH5WIeB4lqnW6D6?usp=sharing Workflow Link: https://my.binghamton.edu/admin/workflows/8#forms
non_process
graphene workflows switching tabs while editing workflow form breaks things alikemaltanriverdi just added the screen recording for a way to duplicate this bug you can find the link below workflow link
0
19,816
26,203,507,212
IssuesEvent
2023-01-03 19:55:59
keras-team/keras-cv
https://api.github.com/repos/keras-team/keras-cv
reopened
Add augment_bounding_boxes support to RandomCutOut layer
contribution-welcome preprocessing
The `augment_bounding_boxes` method should be implemented for the RandomCutOut layer in keras_cv. The PR should contain the implementation, test scripts, and a demo script to verify it. Example code for implementing augment_bounding_boxes() can be found here - https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_flip.py#:~:text=def%20augment_bounding_boxes(,)%3A - https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_rotation.py#:~:text=def%20augment_image(self%2C%20image%2C%20transformation%2C%20**kwargs)%3A - The implementations can be verified using the demo utils in keras_cv.bounding_box - An example demo script can be found here: https://github.com/keras-team/keras-cv/blob/master/examples/layers/preprocessing/bounding_box/random_rotation_demo.py
1.0
Add augment_bounding_boxes support to RandomCutOut layer - The `augment_bounding_boxes` method should be implemented for the RandomCutOut layer in keras_cv. The PR should contain the implementation, test scripts, and a demo script to verify it. Example code for implementing augment_bounding_boxes() can be found here - https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_flip.py#:~:text=def%20augment_bounding_boxes(,)%3A - https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_rotation.py#:~:text=def%20augment_image(self%2C%20image%2C%20transformation%2C%20**kwargs)%3A - The implementations can be verified using the demo utils in keras_cv.bounding_box - An example demo script can be found here: https://github.com/keras-team/keras-cv/blob/master/examples/layers/preprocessing/bounding_box/random_rotation_demo.py
process
add augment bounding boxes support to randomcutout layer the augment bounding boxes method should be implemented for the randomcutout layer in keras cv the pr should contain the implementation test scripts and a demo script to verify it example code for implementing augment bounding boxes can be found here the implementations can be verified using the demo utils in keras cv bounding box an example demo script can be found here
1
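For orientation, here is a minimal sketch of the override shape the keras-cv record above asks for, assuming the `BaseImageAugmentationLayer` contract from the linked `random_flip.py` example; the import path, class skeleton, and the pass-through behaviour are all assumptions for illustration, not the decided design.

```python
from keras_cv.layers import BaseImageAugmentationLayer  # import path assumed

class RandomCutoutWithBoxes(BaseImageAugmentationLayer):
    """Hypothetical subclass showing only the bounding-box hook."""

    def augment_bounding_boxes(self, bounding_boxes, transformation=None, **kwargs):
        # Cutout erases pixels in place; it neither moves nor rescales objects,
        # so a defensible baseline is to return the boxes unchanged. A stricter
        # variant could drop boxes whose area is mostly covered by the patch
        # described by `transformation`.
        return bounding_boxes
```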
242
4,805,207,049
IssuesEvent
2016-11-02 15:30:24
jemalloc/jemalloc
https://api.github.com/repos/jemalloc/jemalloc
closed
imx6ul running jemalloc reports 'Alignment trap: not handling instruction e1b24f9f at [<76f015c8>]' and 'Unhandled fault: alignment exception (0x001) at 0x76f2564'
irreproducible portability
Alignment trap: not handling instruction e1b24f9f at [<76f015c8>] Unhandled fault: alignment exception (0x001) at 0x76f2564
True
imx6ul running jemalloc reports 'Alignment trap: not handling instruction e1b24f9f at [<76f015c8>]' and 'Unhandled fault: alignment exception (0x001) at 0x76f2564' - Alignment trap: not handling instruction e1b24f9f at [<76f015c8>] Unhandled fault: alignment exception (0x001) at 0x76f2564
non_process
running jemalloc reports alignment trap not handling instruction at and unhandled fault alignment exception at alignment trap not handling instruction at unhandled fault alignment exception at
0
570,988
17,023,221,032
IssuesEvent
2021-07-03 00:55:26
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Commands to copy and paste the last set of tags
Component: merkaartor Priority: minor Resolution: fixed Type: enhancement
**[Submitted to the original trac issue database at 11.40pm, Sunday, 16th March 2008]** When I trace over aerial images, I often draw the same type of way several times, so a way to repeat the tagging involved would be nice. I suggest Copy Style/Paste Style commands.
1.0
Commands to copy and paste the last set of tags - **[Submitted to the original trac issue database at 11.40pm, Sunday, 16th March 2008]** When I trace over aerial images, I often draw the same type of way several times, so a way to repeat the tagging involved would be nice. I suggest Copy Style/Paste Style commands.
non_process
commands to copy and paste the last set of tags when i trace over aerial images i often draw the same type of way several times so a way to repeat the tagging involved would be nice i suggest copy style paste style commands
0
3,386
6,515,351,230
IssuesEvent
2017-08-26 14:37:39
nodejs/node
https://api.github.com/repos/nodejs/node
closed
FR: child_process.exec(cmd, { additionalEnv })
child_process feature request help wanted mentor-available
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: * * **Platform**: * * **Subsystem**: child_process <!-- Enter your issue details below this comment. --> `child_process.exec(cmd, { env: Object.assign({}, process.env, {NEW_VAR:1}) })` is a very common pattern. IMHO adding an `{ additionalEnv }` option that implements this pattern, will make the API more complete, and less error prone.
1.0
FR: child_process.exec(cmd, { additionalEnv }) - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: * * **Platform**: * * **Subsystem**: child_process <!-- Enter your issue details below this comment. --> `child_process.exec(cmd, { env: Object.assign({}, process.env, {NEW_VAR:1}) })` is a very common pattern. IMHO adding an `{ additionalEnv }` option that implements this pattern, will make the API more complete, and less error prone.
process
fr child process exec cmd additionalenv thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform subsystem child process child process exec cmd env object assign process env new var is a very common pattern imho adding an additionalenv option that implements this pattern will make the api more complete and less error prone
1
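The merge-the-parent-environment pattern that the Node.js record above wraps is language-agnostic; as a hedged illustration only, here is the same idiom in Python's `subprocess` (the variable names and the `printenv` command are arbitrary choices, and this is not the proposed Node API):

```python
import os
import subprocess

# Copy the parent environment and overlay the extra variables -- the manual
# step that an `additionalEnv`-style option would perform for the caller.
extra_env = {"NEW_VAR": "1"}
subprocess.run(
    ["printenv", "NEW_VAR"],          # any child command; printenv is Unix-only
    env={**os.environ, **extra_env},  # merged copy; the parent env is untouched
    check=True,
)
```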
19,998
26,472,226,083
IssuesEvent
2023-01-17 08:25:29
darktable-org/darktable
https://api.github.com/repos/darktable-org/darktable
closed
FR: Multiple presets for a single module
feature: new difficulty: hard scope: UI scope: image processing
**Is your feature request related to a problem? Please describe.** Some modules like diffuse & sharpen have several use cases, and the user will apply multiple instances of the same module. Currently you can store only one preset per module that will be auto-applied. **Describe the solution you'd like** The user can store multiple named presets (like named module instances) that can be auto-applied to an image. If the preset had a name, it could be used to differentiate between different conditions when the module is applied. Currently you cannot have the same settings applied with different conditions; you also have to tweak the module parameters so that they are not 1 to 1 with the other presets. **Alternatives** The current workaround is to define multiple instances as styles.
1.0
FR: Multiple presets for a single module - **Is your feature request related to a problem? Please describe.** Some modules like diffuse & sharpen have several use cases, and the user will apply multiple instances of the same module. Currently you can store only one preset per module that will be auto-applied. **Describe the solution you'd like** The user can store multiple named presets (like named module instances) that can be auto-applied to an image. If the preset had a name, it could be used to differentiate between different conditions when the module is applied. Currently you cannot have the same settings applied with different conditions; you also have to tweak the module parameters so that they are not 1 to 1 with the other presets. **Alternatives** The current workaround is to define multiple instances as styles.
process
fr multiple presets for a single module is your feature request related to a problem please describe some modules like diffuse sharpen have several use cases and the user will apply multiple instances of the same module currently you can store only one preset per module that will be auto applied describe the solution you d like the user can store multiple named presets like named module instances that can be auto applied to an image if the preset had a name it could be used to differentiate between different conditions when the module is applied currently you cannot have the same settings applied with different conditions you also have to tweak the module parameters so that they are not to with the other presets alternatives the current workaround is to define multiple instances as styles
1
84,989
10,577,848,700
IssuesEvent
2019-10-07 21:02:53
mozilla-mobile/fenix
https://api.github.com/repos/mozilla-mobile/fenix
closed
[Bug] Favicons in History and Bookmarks are too small for long-sighted users
needs:UX-feedback ux:visual-design 🐞 bug
The favicons in History and Bookmarks are tiny, hardly bigger than a capital letter O in the normal font. I'm pretty long-sighted (getting old). At a distance where I can read the URL text the favicons mostly look like coloured squares to me, can't see much detail inside them. Particularly a problem with the dark theme where I find the circles really "pop" and I mostly see a white circle with a small coloured square inside & some indistinct graphics. Aesthetically as well I dislike the circles around the favicons, it seems unnecessary & isn't used elsewhere in Fenix or in other Mozilla browsers? Could the favicons on History and Bookmarks be the same size as the favicons in my Collections?
1.0
[Bug] Favicons in History and Bookmarks are too small for long-sighted users - The favicons in History and Bookmarks are tiny, hardly bigger than a capital letter O in the normal font. I'm pretty long-sighted (getting old). At a distance where I can read the URL text the favicons mostly look like coloured squares to me, can't see much detail inside them. Particularly a problem with the dark theme where I find the circles really "pop" and I mostly see a white circle with a small coloured square inside & some indistinct graphics. Aesthetically as well I dislike the circles around the favicons, it seems unnecessary & isn't used elsewhere in Fenix or in other Mozilla browsers? Could the favicons on History and Bookmarks be the same size as the favicons in my Collections?
non_process
favicons in history and bookmarks are too small for long sighted users the favicons in history and bookmarks are tiny hardly bigger than a capital letter o in the normal font i m pretty long sighted getting old at a distance where i can read the url text the favicons mostly look like coloured squares to me can t see much detail inside them particularly a problem with the dark theme where i find the circles really pop and i mostly see a white circle with a small coloured square inside some indistinct graphics aesthetically as well i dislike the circles around the favicons it seems unnecessary isn t used elsewhere in fenix or in other mozilla browsers could the favicons on history and bookmarks be the same size as the favicons in my collections
0
249,631
21,181,263,643
IssuesEvent
2022-04-08 08:11:17
hzi-braunschweig/SORMAS-Project
https://api.github.com/repos/hzi-braunschweig/SORMAS-Project
opened
Fix failing performance
testing task e2e-tests
After Entities updates, the performance tests are failing and we need to have them stable in order to be able to measure the APIs' response time to collect an average time for them, which will be used for the controllers' performance tests that we need to create in order to monitor their performance. Please investigate the problems and fix the tests.
2.0
Fix failing performance - After Entities updates, the performance tests are failing and we need to have them stable in order to be able to measure the APIs' response time to collect an average time for them, which will be used for the controllers' performance tests that we need to create in order to monitor their performance. Please investigate the problems and fix the tests.
non_process
fix failing performance after entities updates the performance tests are failing and we need to have them stable in order to be able to measure the apis response time to collect an average time for them which will be used for the controllers performance tests that we need to create in order to monitor their performance please investigate the problems and fix the tests
0
2,019
4,839,139,853
IssuesEvent
2016-11-09 08:13:33
openvstorage/alba
https://api.github.com/repos/openvstorage/alba
closed
Alba binary responds slowly to calls
process_cantreproduce
Alba binary responds slowly to calls in case of RDMA/RORA when there are a lot (20 or more) ASDs in a single backend. Observed a few times on KLONE and Samsung POC. Pinning the ASDs does not help.
1.0
Alba binary responds slowly to calls - Alba binary responds slowly to calls in case of RDMA/RORA when there are a lot (20 or more) ASDs in a single backend. Observed a few times on KLONE and Samsung POC. Pinning the ASDs does not help.
process
alba binary responds slowly to calls alba binary responds slowly to calls in case of rdma rora when there are a lot or more asds in a single backend observed a few times on klone and samsung poc pinning the asds does not help
1
2,145
3,530,435,705
IssuesEvent
2016-01-15 00:05:00
cga-harvard/hypermap
https://api.github.com/repos/cga-harvard/hypermap
closed
Start writing unit tests and enable Travis CI
CI infrastructure
@ingenieroariel any good idea about what I could test at this point?
1.0
Start writing unit tests and enable Travis CI - @ingenieroariel any good idea about what I could test at this point?
non_process
start writing unit tests and enable travis ci ingenieroariel any good idea about what i could test at this point
0
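One low-cost answer to the question in the record above, sketched under the assumption that the project is a Django app (the route and status codes are placeholders): a smoke test proving the app boots and routes resolve is usually the cheapest first unit test to put under Travis CI.

```python
from django.test import Client, TestCase

class SmokeTest(TestCase):
    """Hypothetical first test: prove the app boots and the root URL resolves."""

    def test_homepage_resolves(self):
        response = Client().get("/")
        # Accept a redirect too, in case the root URL forwards to a login page.
        self.assertIn(response.status_code, (200, 302))
```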
65,178
16,129,425,881
IssuesEvent
2021-04-29 00:35:32
grpc/grpc
https://api.github.com/repos/grpc/grpc
closed
FLAKE: "pthread_mutex_lock failed" crash on MacOS (seen in many tests)
disposition/BUILDNURSE kind/bug lang/core priority/P2
I've seen this in a number of tests lately. The most recent one is here: https://source.cloud.google.com/results/invocations/34d2f278-60b4-40fc-b50c-0243d5463ec9/targets/github%2Fgrpc%2Frun_tests%2Fcpp_macos_dbg_native/tests Could this be related to the recent absl synchronization change?
1.0
FLAKE: "pthread_mutex_lock failed" crash on MacOS (seen in many tests) - I've seen this in a number of tests lately. The most recent one is here: https://source.cloud.google.com/results/invocations/34d2f278-60b4-40fc-b50c-0243d5463ec9/targets/github%2Fgrpc%2Frun_tests%2Fcpp_macos_dbg_native/tests Could this be related to the recent absl synchronization change?
non_process
flake pthread mutex lock failed crash on macos seen in many tests i ve seen this in a number of tests lately the most recent one is here could this be related to the recent absl synchronization change
0
177,608
21,480,209,378
IssuesEvent
2022-04-26 16:58:41
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
closed
Document use of action.auto_create_index in relation to Shield
>enhancement >docs :Security/Authorization Team:Docs Team:Security
*Original comment by @nellicus:* Setting `action.auto_create_index: false` would prevent index creation despite the shield privilege `create_index` being granted. It'd be useful to have this documented in the **shield documentation**, where a config like this would still prevent an index from being created. If you have: 1) index `abc` not existing 2) action.auto_create_index: false set in elasticsearch.yml 3) roles.yml containing: ``` myrole: indices: - names: - 'abc' - privileges: ["create_index","index"] ``` then index creation is still prevented when a user with role `myrole` tries to index a document into index `abc`.
True
Document use of action.auto_create_index in relation to Shield - *Original comment by @nellicus:* Setting `action.auto_create_index: false` would prevent index creation despite the shield privilege `create_index` being granted. It'd be useful to have this documented in the **shield documentation**, where a config like this would still prevent an index from being created. If you have: 1) index `abc` not existing 2) action.auto_create_index: false set in elasticsearch.yml 3) roles.yml containing: ``` myrole: indices: - names: - 'abc' - privileges: ["create_index","index"] ``` then index creation is still prevented when a user with role `myrole` tries to index a document into index `abc`.
non_process
document use of action auto create index in relation to shield original comment by nellicus setting action auto create index false would prevent index creation despite the shield privilege create index being granted it d be useful to have this documented in the shield documentation where a config like this would still prevent an index from being created if you have index abc not existing action auto create index false set in elasticsearch yml roles yml containing myrole indices names abc privileges then index creation is still prevented when a user with role myrole tries to index a document into index abc
0
69,087
14,970,058,646
IssuesEvent
2021-01-27 19:02:58
jgeraigery/experian-java
https://api.github.com/repos/jgeraigery/experian-java
closed
CVE-2020-36181 (Medium) detected in jackson-databind-2.9.2.jar - autoclosed
security vulnerability
## CVE-2020-36181 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.2.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: experian-java/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.2/jackson-databind-2.9.2.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.2.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java/commit/e2b236143990842a0d83d97532011829192916a7">e2b236143990842a0d83d97532011829192916a7</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS. <p>Publish Date: 2021-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.2","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.2","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2020-36181","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181","cvss2Severity":"medium","cvss2Score":"6.8","extraData":{}}</REMEDIATE> -->
True
CVE-2020-36181 (Medium) detected in jackson-databind-2.9.2.jar - autoclosed - ## CVE-2020-36181 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.2.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: experian-java/MavenWorkspace/bis-services-lib/bis-services-base/pom.xml</p> <p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.2/jackson-databind-2.9.2.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.2.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/experian-java/commit/e2b236143990842a0d83d97532011829192916a7">e2b236143990842a0d83d97532011829192916a7</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS. <p>Publish Date: 2021-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181>CVE-2020-36181</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.2","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.2","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2020-36181","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp.cpdsadapter.DriverAdapterCPDS.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36181","cvss2Severity":"medium","cvss2Score":"6.8","extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in jackson databind jar autoclosed cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file experian java mavenworkspace bis services lib bis services base pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp cpdsadapter driveradaptercpds publish date url a href cvss score details base score metrics not available isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp dbcp cpdsadapter driveradaptercpds vulnerabilityurl
0
5,397
8,227,779,186
IssuesEvent
2018-09-07 01:03:50
pelias/pelias
https://api.github.com/repos/pelias/pelias
closed
Follow-up with update based on recent WOF changes
processed
Determine what if anything needs to be changed as a result of the work listed in this [WOF issue](https://github.com/whosonfirst/whosonfirst-properties/pull/35) Refer to @nvkelso's [comment here](https://github.com/whosonfirst-data/whosonfirst-data/pull/823#issuecomment-327340235)
1.0
Follow-up with update based on recent WOF changes - Determine what if anything needs to be changed as a result of the work listed in this [WOF issue](https://github.com/whosonfirst/whosonfirst-properties/pull/35) Refer to @nvkelso's [comment here](https://github.com/whosonfirst-data/whosonfirst-data/pull/823#issuecomment-327340235)
process
follow up with update based on recent wof changes determine what if anything needs to be changed as a result of the work listed in this refer to nvkelso s
1
104,772
22,749,902,475
IssuesEvent
2022-07-07 12:20:38
Onelinerhub/onelinerhub
https://api.github.com/repos/Onelinerhub/onelinerhub
closed
Short solution needed: "golang files" (golang)
help wanted good first issue code golang
Please help us write the most modern and shortest code solution for this issue: **golang files** (technology: [golang](https://onelinerhub.com/golang)) ### Fast way Just write the code solution in the comments. ### Preferred way 1. Create a [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside the [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox). 2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) to explain the solution. 3. Link to this issue in the comments of the pull request.
1.0
Short solution needed: "golang files" (golang) - Please help us write the most modern and shortest code solution for this issue: **golang files** (technology: [golang](https://onelinerhub.com/golang)) ### Fast way Just write the code solution in the comments. ### Preferred way 1. Create a [pull request](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md) with a new code file inside the [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox). 2. Don't forget to [use comments](https://github.com/Onelinerhub/onelinerhub/blob/main/how-to-contribute.md#code-file-md-format) to explain the solution. 3. Link to this issue in the comments of the pull request.
non_process
short solution needed golang files golang please help us write the most modern and shortest code solution for this issue golang files technology fast way just write the code solution in the comments preferred way create a pull request with a new code file inside the inbox folder don t forget to use comments to explain the solution link to this issue in the comments of the pull request
0
13,402
15,874,816,280
IssuesEvent
2021-04-09 05:53:39
googleapis/python-pubsub
https://api.github.com/repos/googleapis/python-pubsub
closed
DeprecationWarnings in unit tests should be trapped / asserted
api: pubsub type: process
```bash $ git log -1 commit 469ebaa3c449c881089dfc657da5902c1d031803 (HEAD -> master, origin/master, origin/HEAD) Author: Peter Lamut <plamut@users.noreply.github.com> Date: Fri Apr 2 09:26:10 2021 +0200 chore: regenerate GAPIC layer with latest changes (#345) $ nox -e unit-3.8 nox > Running session unit-3.8 nox > Creating virtual environment (virtualenv) using python3.8 in .nox/unit-3-8 nox > pip install asyncmock pytest-asyncio nox > pip install mock pytest pytest-cov nox > pip install -e . nox > py.test --quiet --junitxml=unit_3.8_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 8%] ........................................................................ [ 17%] ........................................................................ [ 26%] ........................................................................ [ 34%] ........................................................................ [ 43%] ........................................................................ [ 52%] ........................................................................ [ 61%] ........................................................................ [ 69%] ........................................................................ [ 78%] ........................................................................ [ 87%] ........................................................................ [ 96%] ............................... [100%] =============================== warnings summary =============================== tests/unit/gapic/pubsub_v1/test_subscriber.py::test_pull_flattened /home/tseaver/projects/agendaless/Google/src/python-pubsub/google/pubsub_v1/services/subscriber/client.py:1129: DeprecationWarning: The return_immediately flag is deprecated and should be set to False. warnings.warn( tests/unit/gapic/pubsub_v1/test_subscriber.py::test_pull_flattened_async /home/tseaver/projects/agendaless/Google/src/python-pubsub/google/pubsub_v1/services/subscriber/async_client.py:947: DeprecationWarning: The return_immediately flag is deprecated and should be set to False. warnings.warn( ... ``` Warnings should be trapped and checked (e.g., see [`test_sync_pull_warning_if_return_immediately`](https://github.com/googleapis/python-pubsub/blob/469ebaa3c449c881089dfc657da5902c1d031803/tests/unit/pubsub_v1/subscriber/test_subscriber_client.py#L229-L239)).
1.0
DeprecationWarnings in unit tests should be trapped / asserted - ```bash $ git log -1 commit 469ebaa3c449c881089dfc657da5902c1d031803 (HEAD -> master, origin/master, origin/HEAD) Author: Peter Lamut <plamut@users.noreply.github.com> Date: Fri Apr 2 09:26:10 2021 +0200 chore: regenerate GAPIC layer with latest changes (#345) $ nox -e unit-3.8 nox > Running session unit-3.8 nox > Creating virtual environment (virtualenv) using python3.8 in .nox/unit-3-8 nox > pip install asyncmock pytest-asyncio nox > pip install mock pytest pytest-cov nox > pip install -e . nox > py.test --quiet --junitxml=unit_3.8_sponge_log.xml --cov=google/cloud --cov=tests/unit --cov-append --cov-config=.coveragerc --cov-report= --cov-fail-under=0 tests/unit ........................................................................ [ 8%] ........................................................................ [ 17%] ........................................................................ [ 26%] ........................................................................ [ 34%] ........................................................................ [ 43%] ........................................................................ [ 52%] ........................................................................ [ 61%] ........................................................................ [ 69%] ........................................................................ [ 78%] ........................................................................ [ 87%] ........................................................................ [ 96%] ............................... [100%] =============================== warnings summary =============================== tests/unit/gapic/pubsub_v1/test_subscriber.py::test_pull_flattened /home/tseaver/projects/agendaless/Google/src/python-pubsub/google/pubsub_v1/services/subscriber/client.py:1129: DeprecationWarning: The return_immediately flag is deprecated and should be set to False. warnings.warn( tests/unit/gapic/pubsub_v1/test_subscriber.py::test_pull_flattened_async /home/tseaver/projects/agendaless/Google/src/python-pubsub/google/pubsub_v1/services/subscriber/async_client.py:947: DeprecationWarning: The return_immediately flag is deprecated and should be set to False. warnings.warn( ... ``` Warnings should be trapped and checked (e.g., see [`test_sync_pull_warning_if_return_immediately`](https://github.com/googleapis/python-pubsub/blob/469ebaa3c449c881089dfc657da5902c1d031803/tests/unit/pubsub_v1/subscriber/test_subscriber_client.py#L229-L239)).
process
deprecationwarnings in unit tests should be trapped asserted bash git log commit head master origin master origin head author peter lamut date fri apr chore regenerate gapic layer with latest changes nox e unit nox running session unit nox creating virtual environment virtualenv using in nox unit nox pip install asyncmock pytest asyncio nox pip install mock pytest pytest cov nox pip install e nox py test quiet junitxml unit sponge log xml cov google cloud cov tests unit cov append cov config coveragerc cov report cov fail under tests unit warnings summary tests unit gapic pubsub test subscriber py test pull flattened home tseaver projects agendaless google src python pubsub google pubsub services subscriber client py deprecationwarning the return immediately flag is deprecated and should be set to false warnings warn tests unit gapic pubsub test subscriber py test pull flattened async home tseaver projects agendaless google src python pubsub google pubsub services subscriber async client py deprecationwarning the return immediately flag is deprecated and should be set to false warnings warn warnings should be trapped and checked e g see
1
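Here is a minimal sketch of the trapping the record above asks for, assuming pytest plus the flattened `pull` signature shown in the warning; `client` and `subscription_path` are hypothetical fixtures, not names taken from the repository:

```python
import pytest

def test_pull_traps_return_immediately_warning(client, subscription_path):
    # Assert the DeprecationWarning explicitly so it is consumed by the test
    # instead of leaking into the warnings summary of the whole run.
    with pytest.warns(DeprecationWarning, match="return_immediately"):
        client.pull(
            subscription=subscription_path,
            return_immediately=True,
            max_messages=1,
        )
```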
7,434
10,550,283,062
IssuesEvent
2019-10-03 10:41:00
codacy/codacy-meta
https://api.github.com/repos/codacy/codacy-meta
closed
Open project checklist
Processes Tech
We want a checklist to ensure all our open source projects are set up the same way and respect the quality checks. Add a PR to the handbook describing the process. gz#7542
1.0
Open project checklist - We want a checklist to ensure all our open source projects are set up the same way and respect the quality checks. Add a PR to the handbook describing the process. gz#7542
process
open project checklist we want a checklist to ensure all our open source projects are set up the same way and respect the quality checks add a pr to the handbook describing the process gz
1
17,363
23,186,380,507
IssuesEvent
2022-08-01 08:45:39
streamnative/flink
https://api.github.com/repos/streamnative/flink
closed
[SQL Connector] default subscription name should have some randomness
compute/data-processing
Currently, if multiple Flink tables are created against a Pulsar topic, by default they will use the same subscription name, and this can cause trouble since, in my understanding, they might use the same subscription. We should add a default random subscription name for Pulsar users. Tests should be added that, when multiple tables are created, the subscriptions they use are not the same.
1.0
[SQL Connector] default subscription name should have some randomness - Currently, if multiple Flink tables are created against a Pulsar topic, by default they will use the same subscription name, and this can cause trouble since, in my understanding, they might use the same subscription. We should add a default random subscription name for Pulsar users. Tests should be added that, when multiple tables are created, the subscriptions they use are not the same.
process
default subscription name should have some randomness currently if multiple flink tables are created against a pulsar topic by default they will use the same subscription name and this can cause trouble since in my understanding they might use the same subscription we should add a default random subscription name for pulsar users tests should be added that when multiple tables are created the subscriptions they use are not the same
1
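The randomness the record above calls for amounts to suffixing the default name; here is a tiny sketch of the idea, shown in Python purely for illustration (the connector itself is Java, and the prefix is an arbitrary placeholder):

```python
import uuid

def default_subscription_name(prefix: str = "flink-sql-connector") -> str:
    # A short random suffix guarantees that two tables created against the
    # same topic never silently share one Pulsar subscription.
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

# Example: default_subscription_name() -> 'flink-sql-connector-3fa85f64'
```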
787,613
27,724,533,623
IssuesEvent
2023-03-15 00:22:39
AlphaWallet/alpha-wallet-ios
https://api.github.com/repos/AlphaWallet/alpha-wallet-ios
closed
Fix error type implementation so handling them is less fragile
High Priority
This switch statement in particular is very fragile: https://github.com/AlphaWallet/alpha-wallet-ios/blob/2dccbef3f3e10a637c61e2914fdc12c9230afc05/AlphaWallet/Common/Types/Error.swift#L78 A. So the right way is to implement `errorDescription`, not `localizedDescription`. It is important to *not* implement `localizedDescription` at all. e.g. Replace: ``` public var localizedDescription: String { switch self { case .unableToBuildSwapUnsignedTransaction(let message): return "Unable To Build Swap Unsigned Transaction: \(message)" ``` with: ``` public var errorDescription: String? { switch self { case .unableToBuildSwapUnsignedTransaction(let message): return "Unable To Build Swap Unsigned Transaction: \(message)" ``` B. That makes extracting the error message when giving an `Error` much simpler and robust (this is in `Error.swift`): ``` public var prettyError: String { return localizedDescription //Making `prettyError` unnecessary } ``` A standalone example: https://gist.github.com/hboon/5c8e258aaedfc896688d473a4cabee79
1.0
Fix error type implementation so handling them is less fragile - This switch statement in particular is very fragile: https://github.com/AlphaWallet/alpha-wallet-ios/blob/2dccbef3f3e10a637c61e2914fdc12c9230afc05/AlphaWallet/Common/Types/Error.swift#L78 A. So the right way is to implement `errorDescription`, not `localizedDescription`. It is important to *not* implement `localizedDescription` at all. e.g. Replace: ``` public var localizedDescription: String { switch self { case .unableToBuildSwapUnsignedTransaction(let message): return "Unable To Build Swap Unsigned Transaction: \(message)" ``` with: ``` public var errorDescription: String? { switch self { case .unableToBuildSwapUnsignedTransaction(let message): return "Unable To Build Swap Unsigned Transaction: \(message)" ``` B. That makes extracting the error message when giving an `Error` much simpler and robust (this is in `Error.swift`): ``` public var prettyError: String { return localizedDescription //Making `prettyError` unnecessary } ``` A standalone example: https://gist.github.com/hboon/5c8e258aaedfc896688d473a4cabee79
non_process
fix error type implementation so handling them is less fragile this switch statement in particular is very fragile a so the right way is to implement errordescription not localizeddescription it is important to not implement localizeddescription at all e g replace public var localizeddescription string switch self case unabletobuildswapunsignedtransaction let message return unable to build swap unsigned transaction message with public var errordescription string switch self case unabletobuildswapunsignedtransaction let message return unable to build swap unsigned transaction message b that makes extracting the error message when given an error much simpler and more robust this is in error swift public var prettyerror string return localizeddescription making prettyerror unnecessary a standalone example
0
138,213
18,771,758,811
IssuesEvent
2021-11-07 00:12:38
samqws-marketing/fico-xpress_vdlx-datagrid
https://api.github.com/repos/samqws-marketing/fico-xpress_vdlx-datagrid
opened
CVE-2021-3807 (High) detected in multiple libraries
security vulnerability
## CVE-2021-3807 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ansi-regex-4.1.0.tgz</b>, <b>ansi-regex-5.0.0.tgz</b>, <b>ansi-regex-3.0.0.tgz</b></p></summary> <p> <details><summary><b>ansi-regex-4.1.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz</a></p> <p>Path to dependency file: fico-xpress_vdlx-datagrid/package.json</p> <p>Path to vulnerable library: fico-xpress_vdlx-datagrid/node_modules/purgecss/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/string-width/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/cliui/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/wrap-ansi/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - parcel-bundler-1.12.4.tgz (Root Library) - htmlnano-0.2.5.tgz - purgecss-1.4.1.tgz - yargs-14.2.0.tgz - string-width-3.1.0.tgz - strip-ansi-5.2.0.tgz - :x: **ansi-regex-4.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>ansi-regex-5.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p> <p>Path to dependency file: fico-xpress_vdlx-datagrid/package.json</p> <p>Path to vulnerable library: fico-xpress_vdlx-datagrid/node_modules/jest-each/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-validate/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-leak-detector/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-matcher-utils/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-config/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-runtime/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/@jest/core/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/pretty-format/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/string-length/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-jasmine2/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-snapshot/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - jest-26.0.1.tgz (Root Library) - jest-cli-26.0.1.tgz - yargs-15.3.1.tgz - cliui-6.0.0.tgz - strip-ansi-6.0.0.tgz - :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library) </details> <details><summary><b>ansi-regex-3.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz</a></p> <p>Path to dependency file: fico-xpress_vdlx-datagrid/package.json</p> <p>Path to vulnerable library: fico-xpress_vdlx-datagrid/node_modules/strip-ansi/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - parcel-bundler-1.12.4.tgz (Root Library) - logger-1.11.1.tgz - strip-ansi-4.0.0.tgz - :x: **ansi-regex-3.0.0.tgz** 
(Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/fico-xpress_vdlx-datagrid/commit/1034e9edaadc6cb260836b29dab13197a606790b">1034e9edaadc6cb260836b29dab13197a606790b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ansi-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"4.1.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"parcel-bundler:1.12.4;htmlnano:0.2.5;purgecss:1.4.1;yargs:14.2.0;string-width:3.1.0;strip-ansi:5.2.0;ansi-regex:4.1.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"},{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"5.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"jest:26.0.1;jest-cli:26.0.1;yargs:15.3.1;cliui:6.0.0;strip-ansi:6.0.0;ansi-regex:5.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"},{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"3.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"parcel-bundler:1.12.4;@parcel/logger:1.11.1;strip-ansi:4.0.0;ansi-regex:3.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3807","vulnerabilityDetails":"ansi-regex is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-3807 (High) detected in multiple libraries - ## CVE-2021-3807 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ansi-regex-4.1.0.tgz</b>, <b>ansi-regex-5.0.0.tgz</b>, <b>ansi-regex-3.0.0.tgz</b></p></summary> <p> <details><summary><b>ansi-regex-4.1.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-4.1.0.tgz</a></p> <p>Path to dependency file: fico-xpress_vdlx-datagrid/package.json</p> <p>Path to vulnerable library: fico-xpress_vdlx-datagrid/node_modules/purgecss/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/string-width/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/cliui/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/wrap-ansi/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - parcel-bundler-1.12.4.tgz (Root Library) - htmlnano-0.2.5.tgz - purgecss-1.4.1.tgz - yargs-14.2.0.tgz - string-width-3.1.0.tgz - strip-ansi-5.2.0.tgz - :x: **ansi-regex-4.1.0.tgz** (Vulnerable Library) </details> <details><summary><b>ansi-regex-5.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p> <p>Path to dependency file: fico-xpress_vdlx-datagrid/package.json</p> <p>Path to vulnerable library: fico-xpress_vdlx-datagrid/node_modules/jest-each/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-validate/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-leak-detector/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-matcher-utils/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-config/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-runtime/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/@jest/core/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/pretty-format/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/string-length/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-jasmine2/node_modules/ansi-regex/package.json,fico-xpress_vdlx-datagrid/node_modules/jest-snapshot/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - jest-26.0.1.tgz (Root Library) - jest-cli-26.0.1.tgz - yargs-15.3.1.tgz - cliui-6.0.0.tgz - strip-ansi-6.0.0.tgz - :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library) </details> <details><summary><b>ansi-regex-3.0.0.tgz</b></p></summary> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-3.0.0.tgz</a></p> <p>Path to dependency file: fico-xpress_vdlx-datagrid/package.json</p> <p>Path to vulnerable library: fico-xpress_vdlx-datagrid/node_modules/strip-ansi/node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - parcel-bundler-1.12.4.tgz (Root Library) - logger-1.11.1.tgz - 
strip-ansi-4.0.0.tgz - :x: **ansi-regex-3.0.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/fico-xpress_vdlx-datagrid/commit/1034e9edaadc6cb260836b29dab13197a606790b">1034e9edaadc6cb260836b29dab13197a606790b</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ansi-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807>CVE-2021-3807</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution: ansi-regex - 5.0.1,6.0.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"4.1.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"parcel-bundler:1.12.4;htmlnano:0.2.5;purgecss:1.4.1;yargs:14.2.0;string-width:3.1.0;strip-ansi:5.2.0;ansi-regex:4.1.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"},{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"5.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"jest:26.0.1;jest-cli:26.0.1;yargs:15.3.1;cliui:6.0.0;strip-ansi:6.0.0;ansi-regex:5.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"},{"packageType":"javascript/Node.js","packageName":"ansi-regex","packageVersion":"3.0.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"parcel-bundler:1.12.4;@parcel/logger:1.11.1;strip-ansi:4.0.0;ansi-regex:3.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"ansi-regex - 5.0.1,6.0.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3807","vulnerabilityDetails":"ansi-regex is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3807","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries ansi regex tgz ansi regex tgz ansi regex tgz ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file fico xpress vdlx datagrid package json path to vulnerable library fico xpress vdlx datagrid node modules purgecss node modules ansi regex package json fico xpress vdlx datagrid node modules string width node modules ansi regex package json fico xpress vdlx datagrid node modules cliui node modules ansi regex package json fico xpress vdlx datagrid node modules wrap ansi node modules ansi regex package json dependency hierarchy parcel bundler tgz root library htmlnano tgz purgecss tgz yargs tgz string width tgz strip ansi tgz x ansi regex tgz vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file fico xpress vdlx datagrid package json path to vulnerable library fico xpress vdlx datagrid node modules jest each node modules ansi regex package json fico xpress vdlx datagrid node modules jest validate node modules ansi regex package json fico xpress vdlx datagrid node modules jest leak detector node modules ansi regex package json fico xpress vdlx datagrid node modules jest matcher utils node modules ansi regex package json fico xpress vdlx datagrid node modules jest config node modules ansi regex package json fico xpress vdlx datagrid node modules jest runtime node modules ansi regex package json fico xpress vdlx datagrid node modules jest core node modules ansi regex package json fico xpress vdlx datagrid node modules pretty format node modules ansi regex package json fico xpress vdlx datagrid node modules jest node modules ansi regex package json fico xpress vdlx datagrid node modules string length node modules ansi regex package json fico xpress vdlx datagrid node modules jest node modules ansi regex package json fico xpress vdlx datagrid node modules jest snapshot node modules ansi regex package json dependency hierarchy jest tgz root library jest cli tgz yargs tgz cliui tgz strip ansi tgz x ansi regex tgz vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file fico xpress vdlx datagrid package json path to vulnerable library fico xpress vdlx datagrid node modules strip ansi node modules ansi regex package json dependency hierarchy parcel bundler tgz root library logger tgz strip ansi tgz x ansi regex tgz vulnerable library found in head commit a href found in base branch master vulnerability details ansi regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansi regex isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree parcel bundler htmlnano purgecss yargs string width strip ansi ansi regex isminimumfixversionavailable true minimumfixversion ansi regex packagetype javascript node js packagename ansi regex packageversion packagefilepaths istransitivedependency true dependencytree jest jest cli yargs cliui strip ansi 
ansi regex isminimumfixversionavailable true minimumfixversion ansi regex packagetype javascript node js packagename ansi regex packageversion packagefilepaths istransitivedependency true dependencytree parcel bundler parcel logger strip ansi ansi regex isminimumfixversionavailable true minimumfixversion ansi regex basebranches vulnerabilityidentifier cve vulnerabilitydetails ansi regex is vulnerable to inefficient regular expression complexity vulnerabilityurl
0
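As a quick triage aid for the advisory above (a sketch, not part of the WhiteSource report; it assumes an npm lockfile with a nested `dependencies` tree, i.e. lockfileVersion 1), the following Python walks `package-lock.json` and prints every dependency path whose resolved `ansi-regex` is not one of the fixed releases:

```python
import json
from pathlib import Path

# Fix versions named in the advisory (5.0.1 / 6.0.1).
FIXED = {"5.0.1", "6.0.1"}

def report(deps, path=()):
    """Recursively walk a lockfileVersion-1 'dependencies' tree and print
    the path to every resolved ansi-regex that is not a fixed release."""
    for name, info in sorted((deps or {}).items()):
        here = path + (f"{name}@{info.get('version', '?')}",)
        if name == "ansi-regex" and info.get("version") not in FIXED:
            print(" > ".join(here))
        report(info.get("dependencies"), here)

lock = json.loads(Path("package-lock.json").read_text())
report(lock.get("dependencies"))
```

Each printed line corresponds to one of the dependency hierarchies listed in the report, which makes it easy to see which root library pins the vulnerable version.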
411,091
12,014,430,646
IssuesEvent
2020-04-10 11:25:51
Yalantis/PullToRefresh
https://api.github.com/repos/Yalantis/PullToRefresh
closed
The opposite refresher is missing
priority: low status: reviewed status: valid type: bug
# Report > It can be reproduced in PullToRefreshDemo.xcodeproj ## Report a bug ### What did you do? > Pull down the table view, and don't fire the action of the top refresher, and hold the finger. > Scroll up the table view rapidly. > Go to the bottom of the table view. ### What did you expect to happen? > The bottom refresher still exists. ### What happened instead? > The bottom refresher is missing. ### Your Environment - Version of the component: 3.1.0 - Swift version: 4.2 - iOS version: iOS 12.0.1 - Device: iPhone X - Xcode version: 10.0 (10A255) - CocoaPods: 1.5.3 ### Project that demonstrates the bug > The PullToRefreshDemo.xcodeproj can reproduce it. ### Advice In the `override open func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?)` of `PullToRefresher.swift`, I changed some code in the first `if` part: In the offset switch, maybe PullToRefresh needs to handle the `offset > 0` case: ` if state != .loading && state != .finished && state != .initial { state = .initial }` Maybe it is a workaround.
1.0
The opposite refresher is missing - # Report > It can be reproduced in PullToRefreshDemo.xcodeproj ## Report a bug ### What did you do? > Pull down the table view, and don't fire the action of the top refresher, and hold the finger. > Scroll up the table view rapidly. > Go to the bottom of the table view. ### What did you expect to happen? > The bottom refresher still exists. ### What happened instead? > The bottom refresher is missing. ### Your Environment - Version of the component: 3.1.0 - Swift version: 4.2 - iOS version: iOS 12.0.1 - Device: iPhone X - Xcode version: 10.0 (10A255) - CocoaPods: 1.5.3 ### Project that demonstrates the bug > The PullToRefreshDemo.xcodeproj can reproduce it. ### Advice In the `override open func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?)` of `PullToRefresher.swift`, I changed some code in the first `if` part: In the offset switch, maybe PullToRefresh needs to handle the `offset > 0` case: ` if state != .loading && state != .finished && state != .initial { state = .initial }` Maybe it is a workaround.
non_process
the opposite refresher is missing report it can be reproduced in pulltorefreshdemo xcodeproj report a bug what did you do pull down the table view and don t fire the action of the top refresher and hold the finger scroll up the table view rapidly go to the bottom of the table view what did you expect to happen the bottom refresher still exists what happened instead the bottom refresher is missing your environment version of the component swift version ios version ios device iphone x xcode version cocoapods project that demonstrates the bug the pulltorefreshdemo xcodeproj can reproduce it advice in the override open func observevalue forkeypath keypath string of object any change context unsafemutablerawpointer of pulltorefresher swift i changed some code in the first if part in the offset switch maybe pulltorefresh needs to handle the offset case if state loading state finished state initial state initial maybe it is a workaround
0
471,510
13,578,707,277
IssuesEvent
2020-09-20 09:14:11
shahednasser/sbuttons
https://api.github.com/repos/shahednasser/sbuttons
closed
Add black button
Priority: Medium buttons good first issue help wanted up-for-grabs
Add a black button to the available colors. If there's an issue with how it looks on dark mode, you can either add a border or a box shadow; it's up to you. When you add the color variables in `src/sbuttons.less` and the style rules in `src/components/_basic.less`, make sure to also add the new color in `assets/js/buttons-examples.js` to show it on the website. Please follow the same convention as previous colors.
1.0
Add black button - Add a black button to the available colors. If there's an issue with how it looks on dark mode, you can either add a border or a box shadow; it's up to you. When you add the color variables in `src/sbuttons.less` and the style rules in `src/components/_basic.less`, make sure to also add the new color in `assets/js/buttons-examples.js` to show it on the website. Please follow the same convention as previous colors.
non_process
add black button add a black button to the available colors if there s an issue with how it looks on dark mode you can either add a border or a box shadow it s up to you when you add the color variables in src sbuttons less and the style rules in src components basic less make sure to also add the new color in assets js buttons examples js to show it on the website please follow the same convention as previous colors
0
17,527
23,340,845,730
IssuesEvent
2022-08-09 13:55:29
deepset-ai/haystack
https://api.github.com/repos/deepset-ai/haystack
closed
Add page information to Document metadata when converting PDF files
type:feature Contributions wanted! topic:file_converter good second issue topic:preprocessing
**Is your feature request related to a problem? Please describe.** When splitting long PDF documents into smaller Haystack Documents I might want to know on which page of the original PDF the text is. The feature is also somewhat related to https://github.com/deepset-ai/haystack/issues/1373 since when we have the page in the original document we can also more easily match the answer. **Describe the solution you'd like** I see the PDF converters already give out page info, like in Haystacks PDFToTextConverter `pages = self._read_pdf(file_path, layout=False, encoding=encoding)` Maybe we can propagate this page info somehow to the preprocessor that splits texts into Documents? It would also be good to have page information added to tables coming back from Parsr.
1.0
Add page information to Document metadata when converting PDF files - **Is your feature request related to a problem? Please describe.** When splitting long PDF documents into smaller Haystack Documents I might want to know on which page of the original PDF the text is. The feature is also somewhat related to https://github.com/deepset-ai/haystack/issues/1373 since when we have the page in the original document we can also more easily match the answer. **Describe the solution you'd like** I see the PDF converters already give out page info, like in Haystacks PDFToTextConverter `pages = self._read_pdf(file_path, layout=False, encoding=encoding)` Maybe we can propagate this page info somehow to the preprocessor that splits texts into Documents? It would also be good to have page information added to tables coming back from Parsr.
process
add page information to document metadata when converting pdf files is your feature request related to a problem please describe when splitting long pdf documents into smaller haystack documents i might want to know on which page of the original pdf the text is the feature is also somewhat related to since when we have the page in the original document we can also more easily match the answer describe the solution you d like i see the pdf converters already give out page info like in haystacks pdftotextconverter pages self read pdf file path layout false encoding encoding maybe we can propagate this page info somehow to the preprocessor that splits texts into documents it would also be good to have page information added to tables coming back from parsr
1
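To make the requested feature concrete, here is a minimal sketch (deliberately using plain dicts rather than Haystack's own `Document` class, whose constructor varies across versions): the per-page texts a PDF converter returns become one record per page, with the 1-based page number stored in the metadata so that later splitting steps can inherit it:

```python
def pages_to_records(pages, file_name):
    """Turn per-page text (e.g. the list a PDF-to-text converter returns)
    into one record per page, carrying the page number in the metadata."""
    return [
        {"content": text, "meta": {"name": file_name, "page": page_no}}
        for page_no, text in enumerate(pages, start=1)
        if text.strip()  # skip blank pages
    ]

records = pages_to_records(["First page text.", "", "Third page text."], "report.pdf")
print([r["meta"]["page"] for r in records])  # [1, 3]
```

A preprocessor that splits these records further would copy `meta` into every resulting chunk, which also helps with matching answers back to pages, as mentioned in the linked issue.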
2,583
5,344,278,437
IssuesEvent
2017-02-17 14:06:15
NJDaeger/EssentialCommands
https://api.github.com/repos/NJDaeger/EssentialCommands
closed
NPE when doing anything with player config.
bug in process
In process of fixing this. Won't be fixed until at least this Saturday.
1.0
NPE when doing anything with player config. - In process of fixing this. Won't be fixed until at least this Saturday.
process
npe when doing anything with player config in process of fixing this won t be fixed until at least this saturday
1
30,581
8,563,151,811
IssuesEvent
2018-11-09 13:09:12
sphinx-doc/sphinx
https://api.github.com/repos/sphinx-doc/sphinx
closed
texinfo indentation bug?
bug builder
When producing texinfo documentation from ``` .. RUBRIC:: Differences between ``CoordFunctionSymb`` and callable symbolic expressions Callable symbolic expressions are defined directly from symbolic ``` I obtain ``` @subsubheading Differences between @code{CoordFunctionSymb} and callable symbolic expressions Callable symbolic expressions are defined directly from symbolic ``` Is this a bug in sphinx or is there too little indentation before `symbolic expressions`?
1.0
texinfo indentation bug? - When producing texinfo documentation from ``` .. RUBRIC:: Differences between ``CoordFunctionSymb`` and callable symbolic expressions Callable symbolic expressions are defined directly from symbolic ``` I obtain ``` @subsubheading Differences between @code{CoordFunctionSymb} and callable symbolic expressions Callable symbolic expressions are defined directly from symbolic ``` Is this a bug in sphinx or is there too little indentation before `symbolic expressions`?
non_process
texinfo indentation bug when producing texinfo documentation from rubric differences between coordfunctionsymb and callable symbolic expressions callable symbolic expressions are defined directly from symbolic i obtain subsubheading differences between code coordfunctionsymb and callable symbolic expressions callable symbolic expressions are defined directly from symbolic is this a bug in sphinx or is there too little indentation before symbolic expressions
0
356,112
10,588,977,421
IssuesEvent
2019-10-09 04:14:29
AY1920S1-CS2103T-T09-1/main
https://api.github.com/repos/AY1920S1-CS2103T-T09-1/main
opened
As a traveller, I want to categorise activities by interest
priority.Medium type.Story
so that I can prioritise certain activities
1.0
As a traveller, I want to categorise activities by interest - so that I can prioritise certain activities
non_process
as a traveller i want to categorise activities by interest so that i can prioritise certain activities
0
20,620
27,291,973,016
IssuesEvent
2023-02-23 17:14:03
scikit-learn/scikit-learn
https://api.github.com/repos/scikit-learn/scikit-learn
opened
OrdinalEncoder inconsistent with None and `np.nan` values
RFC module:preprocessing Needs Triage
OrdinalEncoder treats `None` and `nan` differently: ```python from sklearn.preprocessing import OrdinalEncoder import numpy as np enc = OrdinalEncoder() ## Case 1 enc.fit_transform([["dog"], ["cat"], [None]]) # array([[1.], # [0.], # [2.]]) ## Case 2 enc.fit_transform([["dog"], ["cat"], [np.nan]]) # array([[ 1.], # [ 0.], # [nan]]) ``` In case 1, `None` is treated as a category and encoded. In case 2, `np.nan` is passed through and not encoded. Note that, if `None` and `nan` appear, then `None` gets encoded and `np.nan` gets passed through: ```python enc.fit_transform([["dog"], ["cat"], [np.nan], [None]]) # array([[ 1.], # [ 0.], # [nan], # [ 2.]]) ``` We can interpret this as a bug with `None`, which should encode `None` as `np.nan` because the default is `encoded_missing_value=None`. Changing the behavior will break a lot of code because encoding `None` has been a feature since before `encoded_missing_value` was introduced. Functionally, I think it would be useful to configure `np.nan` to be its own category, such as `encoded_missing_value="own_category"`, which will give `nan` the same behavior as `None`.
1.0
OrdinalEncoder inconsistent with None and `np.nan` values - OrdinalEncoder treats `None` and `nan` differently: ```python from sklearn.preprocessing import OrdinalEncoder import numpy as np enc = OrdinalEncoder() ## Case 1 enc.fit_transform([["dog"], ["cat"], [None]]) # array([[1.], # [0.], # [2.]]) ## Case 2 enc.fit_transform([["dog"], ["cat"], [np.nan]]) # array([[ 1.], # [ 0.], # [nan]]) ``` In case 1, `None` is treated as a category and encoded. In case 2, `np.nan` is passed through and not encoded. Note that, if `None` and `nan` appear, then `None` gets encoded and `np.nan` gets passed through: ```python enc.fit_transform([["dog"], ["cat"], [np.nan], [None]]) # array([[ 1.], # [ 0.], # [nan], # [ 2.]]) ``` We can interpret this as a bug with `None`, which should encode `None` as `np.nan` because the default is `encoded_missing_value=None`. Changing the behavior will break a lot of code because encoding `None` has been a feature since before `encoded_missing_value` was introduced. Functionally, I think it would be useful to configure `np.nan` to be its own category, such as `encoded_missing_value="own_category"`, which will give `nan` the same behavior as `None`.
process
ordinalencoder inconsistent with none and np nan values ordinalencoder treats none and nan differently python from sklearn preprocessing import ordinalencoder import numpy as np enc ordinalencoder case enc fit transform array case enc fit transform array in case none is treated as a category and encoded in case np nan is passed through and not encoded note that if none and nan appear then none gets encoded and np nan gets passed through python enc fit transform array we can interpret this as a bug with none which should encode none as np nan because the default is encoded missing value none changing the behavior will break a lot of code because encoding none has been a feature since before encoded missing value was introduced functionally i think it would be useful to configure np nan to be its own category such as encoded missing value own category which will give nan the same behavior as none
1
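One workaround consistent with the analysis above (a sketch, not an endorsed scikit-learn recipe) is to normalise `None` to `np.nan` before encoding, so that both missing markers take the same `encoded_missing_value` path; the parameter is assumed to be available (it was added in scikit-learn 1.1):

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

X = [["dog"], ["cat"], [None], [np.nan]]
# Map None to np.nan first so both missing markers behave identically.
X_norm = [[np.nan if v is None else v for v in row] for row in X]

enc = OrdinalEncoder(encoded_missing_value=-1)  # requires scikit-learn >= 1.1
print(enc.fit_transform(X_norm))
# [[ 1.]
#  [ 0.]
#  [-1.]
#  [-1.]]  <- None and nan now encode to the same missing value
```

Without the normalisation step, the mixed input would show the inconsistency reported above: `None` encoded as its own category, `np.nan` routed through `encoded_missing_value`.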
13,098
15,495,239,381
IssuesEvent
2021-03-11 00:30:58
googleapis/nodejs-speech
https://api.github.com/repos/googleapis/nodejs-speech
closed
Synthesis failed for nodejs-speech
:rotating_light: api: speech autosynth failure flakybot: quiet triage me type: bug type: process
Hello! Autosynth couldn't regenerate nodejs-speech. :broken_heart: Here's the output from running `synth.py`: ``` quest = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListPhraseSetRequest());    ~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:982:92 - error TS2339: Property 'ListPhraseSetRequest' does not exist on type 'typeof v1p1beta1'. 982 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListPhraseSetRequest());    ~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1016:92 - error TS2339: Property 'ListPhraseSetRequest' does not exist on type 'typeof v1p1beta1'. 1016 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListPhraseSetRequest());    ~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1046:92 - error TS2339: Property 'ListPhraseSetRequest' does not exist on type 'typeof v1p1beta1'. 1046 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListPhraseSetRequest());    ~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1075:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1075 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1103:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1103 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1142:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1142 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1165:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1165 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1204:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1204 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1238:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1238 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1268:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1268 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ Found 154 errors. npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! @google-cloud/speech@4.2.0 compile: `tsc -p . && cp system-test/*.js build/system-test/ && cp -r protos build/` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google-cloud/speech@4.2.0 compile script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/kbuilder/.npm/_logs/2021-02-19T10_35_34_508Z-debug.log npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! 
@google-cloud/speech@4.2.0 prepare: `npm run compile` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google-cloud/speech@4.2.0 prepare script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/kbuilder/.npm/_logs/2021-02-19T10_35_34_569Z-debug.log 2021-02-19 02:35:34,596 synthtool [ERROR] > Failed executing npm install: None ERROR:synthtool:Failed executing npm install: None Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module> main() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main spec.loader.exec_module(synth_module) # type: ignore File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/kbuilder/.cache/synthtool/nodejs-speech/synth.py", line 58, in <module> node.postprocess_gapic_library() File "/tmpfs/src/github/synthtool/synthtool/languages/node.py", line 226, in postprocess_gapic_library install(hide_output=hide_output) File "/tmpfs/src/github/synthtool/synthtool/languages/node.py", line 168, in install shell.run(["npm", "install"], hide_output=hide_output) File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run raise exc File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run encoding="utf-8", File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['npm', 'install']' returned non-zero exit status 1. 
2021-02-19 02:35:34,642 autosynth [ERROR] > Synthesis failed 2021-02-19 02:35:34,642 autosynth [DEBUG] > Running: git reset --hard HEAD HEAD is now at 7bdd0b2 chore: update CODEOWNERS and proto fields (#699) 2021-02-19 02:35:34,652 autosynth [DEBUG] > Running: git checkout autosynth Switched to branch 'autosynth' 2021-02-19 02:35:34,659 autosynth [DEBUG] > Running: git clean -fdx Removing __pycache__/ Removing node_modules/ Removing protos/google/cloud/speech/v1p1beta1/cloud_speech_adaptation.proto Removing src/v1p1beta1/adaptation_client.ts Removing src/v1p1beta1/adaptation_client_config.json Removing src/v1p1beta1/adaptation_proto_list.json Removing test/gapic_adaptation_v1p1beta1.ts Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 354, in <module> main() File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 189, in main return _inner_main(temp_dir) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 334, in _inner_main commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 65, in synthesize_loop has_changes = toolbox.synthesize_version_in_new_branch(synthesizer, youngest) File "/tmpfs/src/github/synthtool/autosynth/synth_toolbox.py", line 259, in synthesize_version_in_new_branch synthesizer.synthesize(synth_log_path, self.environ) File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize synth_proc.check_returncode() # Raise an exception. File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode self.stderr) subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1. ``` Google internal developers can see the full log [here](http://sponge2/results/invocations/9c31242c-1e1d-415e-9cdd-10744a490d31/targets/github%2Fsynthtool;config=default/tests;query=nodejs-speech;failed=false).
1.0
Synthesis failed for nodejs-speech - Hello! Autosynth couldn't regenerate nodejs-speech. :broken_heart: Here's the output from running `synth.py`: ``` quest = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListPhraseSetRequest());    ~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:982:92 - error TS2339: Property 'ListPhraseSetRequest' does not exist on type 'typeof v1p1beta1'. 982 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListPhraseSetRequest());    ~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1016:92 - error TS2339: Property 'ListPhraseSetRequest' does not exist on type 'typeof v1p1beta1'. 1016 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListPhraseSetRequest());    ~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1046:92 - error TS2339: Property 'ListPhraseSetRequest' does not exist on type 'typeof v1p1beta1'. 1046 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListPhraseSetRequest());    ~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1075:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1075 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1103:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1103 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1142:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1142 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1165:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1165 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1204:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1204 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1238:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1238 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ test/gapic_adaptation_v1p1beta1.ts:1268:92 - error TS2339: Property 'ListCustomClassesRequest' does not exist on type 'typeof v1p1beta1'. 1268 const request = generateSampleMessage(new protos.google.cloud.speech.v1p1beta1.ListCustomClassesRequest());    ~~~~~~~~~~~~~~~~~~~~~~~~ Found 154 errors. npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! @google-cloud/speech@4.2.0 compile: `tsc -p . && cp system-test/*.js build/system-test/ && cp -r protos build/` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google-cloud/speech@4.2.0 compile script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/kbuilder/.npm/_logs/2021-02-19T10_35_34_508Z-debug.log npm ERR! 
code ELIFECYCLE npm ERR! errno 1 npm ERR! @google-cloud/speech@4.2.0 prepare: `npm run compile` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the @google-cloud/speech@4.2.0 prepare script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! /home/kbuilder/.npm/_logs/2021-02-19T10_35_34_569Z-debug.log 2021-02-19 02:35:34,596 synthtool [ERROR] > Failed executing npm install: None ERROR:synthtool:Failed executing npm install: None Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module> main() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main spec.loader.exec_module(synth_module) # type: ignore File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/kbuilder/.cache/synthtool/nodejs-speech/synth.py", line 58, in <module> node.postprocess_gapic_library() File "/tmpfs/src/github/synthtool/synthtool/languages/node.py", line 226, in postprocess_gapic_library install(hide_output=hide_output) File "/tmpfs/src/github/synthtool/synthtool/languages/node.py", line 168, in install shell.run(["npm", "install"], hide_output=hide_output) File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run raise exc File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run encoding="utf-8", File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['npm', 'install']' returned non-zero exit status 1. 
2021-02-19 02:35:34,642 autosynth [ERROR] > Synthesis failed 2021-02-19 02:35:34,642 autosynth [DEBUG] > Running: git reset --hard HEAD HEAD is now at 7bdd0b2 chore: update CODEOWNERS and proto fields (#699) 2021-02-19 02:35:34,652 autosynth [DEBUG] > Running: git checkout autosynth Switched to branch 'autosynth' 2021-02-19 02:35:34,659 autosynth [DEBUG] > Running: git clean -fdx Removing __pycache__/ Removing node_modules/ Removing protos/google/cloud/speech/v1p1beta1/cloud_speech_adaptation.proto Removing src/v1p1beta1/adaptation_client.ts Removing src/v1p1beta1/adaptation_client_config.json Removing src/v1p1beta1/adaptation_proto_list.json Removing test/gapic_adaptation_v1p1beta1.ts Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 354, in <module> main() File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 189, in main return _inner_main(temp_dir) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 334, in _inner_main commit_count = synthesize_loop(x, multiple_prs, change_pusher, synthesizer) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 65, in synthesize_loop has_changes = toolbox.synthesize_version_in_new_branch(synthesizer, youngest) File "/tmpfs/src/github/synthtool/autosynth/synth_toolbox.py", line 259, in synthesize_version_in_new_branch synthesizer.synthesize(synth_log_path, self.environ) File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize synth_proc.check_returncode() # Raise an exception. File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode self.stderr) subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1. ``` Google internal developers can see the full log [here](http://sponge2/results/invocations/9c31242c-1e1d-415e-9cdd-10744a490d31/targets/github%2Fsynthtool;config=default/tests;query=nodejs-speech;failed=false).
process
synthesis failed for nodejs speech hello autosynth couldn t regenerate nodejs speech broken heart here s the output from running synth py quest generatesamplemessage new protos google cloud speech listphrasesetrequest      gapic adaptation ts         listphrasesetrequest does not exist on type typeof   const request generatesamplemessage new protos google cloud speech listphrasesetrequest      gapic adaptation ts         listphrasesetrequest does not exist on type typeof   const request generatesamplemessage new protos google cloud speech listphrasesetrequest      gapic adaptation ts         listphrasesetrequest does not exist on type typeof   const request generatesamplemessage new protos google cloud speech listphrasesetrequest      gapic adaptation ts         listcustomclassesrequest does not exist on type typeof   const request generatesamplemessage new protos google cloud speech listcustomclassesrequest      gapic adaptation ts         listcustomclassesrequest does not exist on type typeof   const request generatesamplemessage new protos google cloud speech listcustomclassesrequest      gapic adaptation ts         listcustomclassesrequest does not exist on type typeof   const request generatesamplemessage new protos google cloud speech listcustomclassesrequest      gapic adaptation ts         listcustomclassesrequest does not exist on type typeof   const request generatesamplemessage new protos google cloud speech listcustomclassesrequest      gapic adaptation ts         listcustomclassesrequest does not exist on type typeof   const request generatesamplemessage new protos google cloud speech listcustomclassesrequest      gapic adaptation ts         listcustomclassesrequest does not exist on type typeof   const request generatesamplemessage new protos google cloud speech listcustomclassesrequest      gapic adaptation ts         listcustomclassesrequest does not exist on type typeof   const request generatesamplemessage new protos google cloud speech listcustomclassesrequest     found errors npm err code elifecycle npm err errno npm err google cloud speech compile tsc p cp system test js build system test cp r protos build npm err exit status npm err npm err failed at the google cloud speech compile script npm err this is probably not a problem with npm there is likely additional logging output above npm err a complete log of this run can be found in npm err home kbuilder npm logs debug log npm err code elifecycle npm err errno npm err google cloud speech prepare npm run compile npm err exit status npm err npm err failed at the google cloud speech prepare script npm err this is probably not a problem with npm there is likely additional logging output above npm err a complete log of this run can be found in npm err home kbuilder npm logs debug log synthtool failed executing npm install none error synthtool failed executing npm install none traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages 
click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file home kbuilder cache synthtool nodejs speech synth py line in node postprocess gapic library file tmpfs src github synthtool synthtool languages node py line in postprocess gapic library install hide output hide output file tmpfs src github synthtool synthtool languages node py line in install shell run hide output hide output file tmpfs src github synthtool synthtool shell py line in run raise exc file tmpfs src github synthtool synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status autosynth synthesis failed autosynth running git reset hard head head is now at chore update codeowners and proto fields autosynth running git checkout autosynth switched to branch autosynth autosynth running git clean fdx removing pycache removing node modules removing protos google cloud speech cloud speech adaptation proto removing src adaptation client ts removing src adaptation client config json removing src adaptation proto list json removing test gapic adaptation ts traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main commit count synthesize loop x multiple prs change pusher synthesizer file tmpfs src github synthtool autosynth synth py line in synthesize loop has changes toolbox synthesize version in new branch synthesizer youngest file tmpfs src github synthtool autosynth synth toolbox py line in synthesize version in new branch synthesizer synthesize synth log path self environ file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log
1
19,234
5,827,853,234
IssuesEvent
2017-05-08 10:12:41
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
JPADatabase doesn't correctly generate tables for multi-schema entities
C: Code Generation P: Medium R: Fixed T: Defect
When entities are annotated with schema information, then the `JPADatabase` currently doesn't correctly generate the tables. For example: ```java @Entity @Table(schema = "schema1", name = "t_actor") public class JPAActor { @Id int id; @Column(name = "first_name") String firstName; @Column(name = "last_name") String lastName; @ManyToMany @JoinTable( schema = "schema2", name = "t_film_actor", joinColumns = @JoinColumn(name = "actor_id", referencedColumnName = "id"), inverseJoinColumns = @JoinColumn(name = "film_id", referencedColumnName = "id") ) Set<JPAFilm> films; } @Entity @Table(schema = "schema2", name = "t_film") public class JPAFilm { @Id public int id; @Column(name = "title") public String title; @ManyToMany @JoinTable( schema = "schema2", name = "t_film_actor", joinColumns = @JoinColumn(name = "film_id", referencedColumnName = "id"), inverseJoinColumns = @JoinColumn(name = "actor_id", referencedColumnName = "id") ) public Set<JPAActor> actors; } ``` This is probably due to some Hibernate misconfiguration (or bug), as Hibernate's `SchemaExport`, which is used behind the scenes by `JPADatabase`, doesn't generate any non-public tables (nor the schemata) in the H2 in-memory database.
1.0
JPADatabase doesn't correctly generate tables for multi-schema entities - When entities are annotated with schema information, then the `JPADatabase` currently doesn't correctly generate the tables. For example: ```java @Entity @Table(schema = "schema1", name = "t_actor") public class JPAActor { @Id int id; @Column(name = "first_name") String firstName; @Column(name = "last_name") String lastName; @ManyToMany @JoinTable( schema = "schema2", name = "t_film_actor", joinColumns = @JoinColumn(name = "actor_id", referencedColumnName = "id"), inverseJoinColumns = @JoinColumn(name = "film_id", referencedColumnName = "id") ) Set<JPAFilm> films; } @Entity @Table(schema = "schema2", name = "t_film") public class JPAFilm { @Id public int id; @Column(name = "title") public String title; @ManyToMany @JoinTable( schema = "schema2", name = "t_film_actor", joinColumns = @JoinColumn(name = "film_id", referencedColumnName = "id"), inverseJoinColumns = @JoinColumn(name = "actor_id", referencedColumnName = "id") ) public Set<JPAActor> actors; } ``` This is probably due to some Hibernate misconfiguration (or bug), as Hibernate's `SchemaExport`, which is used behind the scenes by `JPADatabase` doesn't generate any non-public tables (nor the schemata) in the H2 in-memory database.
non_process
jpadatabase doesn t correctly generate tables for multi schema entities when entities are annotated with schema information then the jpadatabase currently doesn t correctly generate the tables for example java entity table schema name t actor public class jpaactor id int id column name first name string firstname column name last name string lastname manytomany jointable schema name t film actor joincolumns joincolumn name actor id referencedcolumnname id inversejoincolumns joincolumn name film id referencedcolumnname id set films entity table schema name t film public class jpafilm id public int id column name title public string title manytomany jointable schema name t film actor joincolumns joincolumn name film id referencedcolumnname id inversejoincolumns joincolumn name actor id referencedcolumnname id public set actors this is probably due to some hibernate misconfiguration or bug as hibernate s schemaexport which is used behind the scenes by jpadatabase doesn t generate any non public tables nor the schemata in the in memory database
0
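Comparing the raw line with the cleaned line just above it suggests the preprocessing lowercases the text, turns punctuation into spaces, and drops whole tokens that contain digits (which is why `H2` vanishes here, and `seq2seq` vanishes from a later record). A hedged reconstruction; the real pipeline is not shown in this dump, so treat the exact rules as assumptions:

```python
import re

def clean(text: str) -> str:
    # Assumed rules, inferred from the raw/cleaned pairs in this dump:
    # lowercase, non-alphanumerics become spaces, digit-bearing tokens drop.
    text = re.sub(r"[^0-9a-z]+", " ", text.lower())
    return " ".join(t for t in text.split() if not any(c.isdigit() for c in t))

print(clean("Hibernate's SchemaExport doesn't create H2 schemas (utf-8)."))
# -> hibernate s schemaexport doesn t create schemas utf
```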
27,159
4,047,755,148
IssuesEvent
2016-05-23 07:34:13
geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
https://api.github.com/repos/geetsisbac/WCVVENIXYFVIRBXH3BYTI6TE
closed
n7wX+d07qB4kXzx3PhiUcXjLMHhP5QCmxwKRBGK/b0balsBrhv0IoeXuAHfEO0277lCJAIUCSn4IpMtC1WDGejKniTgb5pHNr2pH4/b/bv3UzTU6vgoBMwcQ2xzFiG8nu8NMwLf1g+xS/i0g3j1r9ISBZQoGnOTMGROSw1rcqXg=
design
OdNdbfDQ3DW0RhQO05sfkXizbN/JuDkuM+seP7E//F215dgdJiq5eF42X90GvWW++9kBLP9S/Z9iyuYS6hJ2w/Bnr4uC0DgnTdvxGycxcb47a0CJXwaZxZhkzpYWGtVjC6T4pgJZY7xc4LppOVlG0n1sHuTw8BVcIpQXR6jSdLCXWqT5cvCXYMglntsD/EsJi9KnEXui0TFsR+G3n45XW/PoOSTxINy8qz42St/KUHLT1hAopLnTKUiAx+W/Advpxenc8LT+aNoE/CXXlO3l70fEzDbDvjlWUZYVn7qUdC7HtD/jeAojnryx5p9zvOl9vtJ8JC3TGTT6usZDHQnmW/QRK+oeYy4JJnR2N8OIHnPNR7lObEqBfh6FDhYpe0KeHKOunygyE/qbBqfAkK1RFyL+QcWEqIN7QazwFG3WjIX0ESvqHmMuCSZ0djfDiB5zB8UUcXQJ91XRguJ8KUS9x7ZpuLHKbSQPMYT4p9nJOB/3tLirilJeZP+NhhDCrNRiKYc1fUbuLCdZfDiC7prfYGCt3AKdBTNz4MUThctgsC/gaFiv9vAxxWVI+JYw90SfuJlZ0Ta2km7At45w3JjzQ8wlT2C1UsC0wwGTVIPpcRO7q6nwDB8p0PkAjI4JnKFlVdP2p29PCAAowy4i2sB/dVDOCMRgFlRvPwSuW+FHlYGhTJdqH+PtLhZWQd8hVk8Y57Lan5vwdrB8Z8S6Tvam+jJqBrFuBzL50TqBVQQCCo4g3Rd7I5B89mzoKtPpjgul/zl7W1peW88/VQBpSGhtPbyFpjo3gE7mvB7CfHOdJIQEzqBjxRRyIXEub+TybNmI
1.0
n7wX+d07qB4kXzx3PhiUcXjLMHhP5QCmxwKRBGK/b0balsBrhv0IoeXuAHfEO0277lCJAIUCSn4IpMtC1WDGejKniTgb5pHNr2pH4/b/bv3UzTU6vgoBMwcQ2xzFiG8nu8NMwLf1g+xS/i0g3j1r9ISBZQoGnOTMGROSw1rcqXg= - OdNdbfDQ3DW0RhQO05sfkXizbN/JuDkuM+seP7E//F215dgdJiq5eF42X90GvWW++9kBLP9S/Z9iyuYS6hJ2w/Bnr4uC0DgnTdvxGycxcb47a0CJXwaZxZhkzpYWGtVjC6T4pgJZY7xc4LppOVlG0n1sHuTw8BVcIpQXR6jSdLCXWqT5cvCXYMglntsD/EsJi9KnEXui0TFsR+G3n45XW/PoOSTxINy8qz42St/KUHLT1hAopLnTKUiAx+W/Advpxenc8LT+aNoE/CXXlO3l70fEzDbDvjlWUZYVn7qUdC7HtD/jeAojnryx5p9zvOl9vtJ8JC3TGTT6usZDHQnmW/QRK+oeYy4JJnR2N8OIHnPNR7lObEqBfh6FDhYpe0KeHKOunygyE/qbBqfAkK1RFyL+QcWEqIN7QazwFG3WjIX0ESvqHmMuCSZ0djfDiB5zB8UUcXQJ91XRguJ8KUS9x7ZpuLHKbSQPMYT4p9nJOB/3tLirilJeZP+NhhDCrNRiKYc1fUbuLCdZfDiC7prfYGCt3AKdBTNz4MUThctgsC/gaFiv9vAxxWVI+JYw90SfuJlZ0Ta2km7At45w3JjzQ8wlT2C1UsC0wwGTVIPpcRO7q6nwDB8p0PkAjI4JnKFlVdP2p29PCAAowy4i2sB/dVDOCMRgFlRvPwSuW+FHlYGhTJdqH+PtLhZWQd8hVk8Y57Lan5vwdrB8Z8S6Tvam+jJqBrFuBzL50TqBVQQCCo4g3Rd7I5B89mzoKtPpjgul/zl7W1peW88/VQBpSGhtPbyFpjo3gE7mvB7CfHOdJIQEzqBjxRRyIXEub+TybNmI
non_process
b xs judkum w anoe qrk dvdocmrgflrvpwsuw fhlyghtjdqh tybnmi
0
21,592
7,047,279,366
IssuesEvent
2018-01-02 12:42:42
ShaikASK/Testing
https://api.github.com/repos/ShaikASK/Testing
closed
Signup::Duplicate emails are triggered upon signing up
AskIT Build Version #2 Defect P2
Steps To Replicate : 1. Launch the url : http://192.168.1.197:9090/#/signup 2. Enter all the valid information in the fields 3. Click on the save button 4. "Thanks you" page is displayed 5. Check the email configured in the system. Experienced Behavior: Observed that duplicate emails are triggered as a part of the notification Expected Behavior: Ensure that only one email is triggered as a part of the notification
1.0
Signup::Duplicate emails are triggered upon signing up - Steps To Replicate : 1. Launch the url : http://192.168.1.197:9090/#/signup 2. Enter all the valid information in the fields 3. Click on the save button 4. "Thanks you" page is displayed 5. Check the email configured in the system. Experienced Behavior: Observed that duplicate emails are triggered as a part of the notification Expected Behavior: Ensure that only one email is triggered as a part of the notification
non_process
signup duplicate emails are triggered upon signing up steps to replicate launch the url enter all the valid information in the fields click on the save button thanks you page is displayed check the email configured in the system experienced behavior observed that duplicate emails are triggered as a part of the notification expected behavior ensure that only one email is triggered as a part of the notification
0
18,595
24,570,354,492
IssuesEvent
2022-10-13 08:09:48
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Android] Issues with respect to push notifications in the mobile app
Bug P1 Android Process: Fixed Process: Tested QA Process: Tested dev
Issues with respect to push notifications in the mobile app. **Scenario 1:** Add resources with an anchor-date based period in the SB, leaving the 'Text for notifying participants about the new resource being available' field unconfigured in the resources screen, then publish the study. **AR:** Blank push notifications are displayed in the mobile app. **ER:** Push notifications should not be displayed in the mobile app. ![android](https://user-images.githubusercontent.com/86007179/188459924-b5a445d6-8a2f-4265-bbe6-7f3533476b12.png) **Scenario 2:** Add resources in the SB with the 'Text for notifying participants about the new resource being available' field configured in the resources screen, then publish the study. **AR:** Old push notifications are displayed again for resources when navigating to the resources screen. **ER:** Old push notifications should not be displayed again for resources when navigating to the resources screen.
3.0
[Android] Issues with respect to push notifications in the mobile app - Issues with respect to push notifications in the mobile app. **Scenario 1:** Add resources with an anchor-date based period in the SB, leaving the 'Text for notifying participants about the new resource being available' field unconfigured in the resources screen, then publish the study. **AR:** Blank push notifications are displayed in the mobile app. **ER:** Push notifications should not be displayed in the mobile app. ![android](https://user-images.githubusercontent.com/86007179/188459924-b5a445d6-8a2f-4265-bbe6-7f3533476b12.png) **Scenario 2:** Add resources in the SB with the 'Text for notifying participants about the new resource being available' field configured in the resources screen, then publish the study. **AR:** Old push notifications are displayed again for resources when navigating to the resources screen. **ER:** Old push notifications should not be displayed again for resources when navigating to the resources screen.
process
issues with respect to push notifications in the mobile app issues with respect to push notifications in the mobile app scenario add resources with an anchor date based period in the sb leaving the text for notifying participants about the new resource being available field unconfigured in the resources screen then publish the study ar blank push notifications are displayed in the mobile app er push notifications should not be displayed in the mobile app scenario add resources in the sb with the text for notifying participants about the new resource being available field configured in the resources screen then publish the study ar old push notifications are displayed again for resources when navigating to the resources screen er old push notifications should not be displayed again for resources when navigating to the resources screen
1
311,990
9,541,425,261
IssuesEvent
2019-04-30 22:22:06
OpeningDesign/CTR
https://api.github.com/repos/OpeningDesign/CTR
closed
Make storage units on first floor in wasted space next to both staircases
1st º Priority
Make storage units on first floor in wasted space next to both staircases
1.0
Make storage units on first floor in wasted space next to both staircases - Make storage units on first floor in wasted space next to both staircases
non_process
make storage units on first floor in wasted space next to both staircases make storage units on first floor in wasted space next to both staircases
0
16,038
9,212,526,774
IssuesEvent
2019-03-10 01:41:32
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
closed
dynamic_decode reports error under eager_execution ? “ValueError: The inequality of unknown TensorShapes is undefined."
comp:eager comp:ops stat:awaiting response type:bug/performance
- TensorFlow version (use command below): 1.2 I have encountered a weird problem when transforming a usual seq2seq code into eager execution mode. After changing the placeholder input to numpy array, by calling `tf.contrib.seq2seq.dynamic_decode(training_decoder,impute_finished=True,maximum_iterations=max_target_sequence_length)` gives error “ValueError: The inequality of unknown TensorShapes is undefined." Without activating eager execution error, everything is fine. **Code to reproduce the issue** The code requires two files from github: https://github.com/udacity/deep-learning/blob/master/seq2seq/data/letters_source.txt and https://github.com/udacity/deep-learning/tree/master/seq2seq/data/letters_target.txt ``` import tensorflow as tf import tensorflow.contrib.eager as tfe tfe.enable_eager_execution() from distutils.version import LooseVersion from tensorflow.python.layers.core import Dense assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) import numpy as np import time import tensorflow as tf with open('data/letters_source.txt', 'r', encoding='utf-8') as f: source_data = f.read() with open('data/letters_target.txt', 'r', encoding='utf-8') as f: target_data = f.read() def extract_character_vocab(data): special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>'] set_words = list(set([character for line in data.split('\n') for character in line])) int_to_vocab = {idx: word for idx, word in enumerate(special_words + set_words)} vocab_to_int = {word: idx for idx, word in int_to_vocab.items()} return int_to_vocab, vocab_to_int source_int_to_letter, source_letter_to_int = extract_character_vocab(source_data) target_int_to_letter, target_letter_to_int = extract_character_vocab(target_data) source_int = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_data.split('\n')] target_int = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_data.split('\n')] def get_batches(targets, sources, batch_size, source_pad_int, target_pad_int): for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths def pad_sentence_batch(sentence_batch, pad_int): max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] display_step = 50 # 每隔50轮输出loss epochs = 60 batch_size = 128 rnn_size = 50 num_layers = 2 encoding_embedding_size = 15 decoding_embedding_size = 15 learning_rate = 0.001 checkpoint = "trained_model.ckpt" train_source = source_int[batch_size:] train_target = target_int[batch_size:] valid_source = source_int[:batch_size] valid_target = target_int[:batch_size] (valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size, 
source_letter_to_int['<PAD>'],target_letter_to_int['<PAD>'])) sess = tf.InteractiveSession() tf.global_variables_initializer() epoch_i = 1 batch_i = 0 (targets_batch, sources_batch, targets_lengths, sources_lengths) = next(get_batches(train_target, train_source, batch_size,source_letter_to_int['<PAD>'],target_letter_to_int['<PAD>'])) input_data = sources_batch targets = targets_batch lr = learning_rate target_sequence_length = targets_lengths source_sequence_length= sources_lengths max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len') #Encoder source_vocab_size = len(source_letter_to_int) target_vocab_size = len(target_letter_to_int) def get_lstm_cell(rnn_size): lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return lstm_cell encoder_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size) cell = tf.contrib.rnn.MultiRNNCell([get_lstm_cell(rnn_size) for _ in range(num_layers)]) encoder_output, encoder_state = tf.nn.dynamic_rnn(cell, encoder_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) ending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1]) decoder_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<GO>']), ending], 1) target_vocab_size = len(target_letter_to_int) decoder_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) decoder_embed_input = tf.nn.embedding_lookup(decoder_embeddings, decoder_input) def get_decoder_cell(rnn_size): decoder_cell = tf.contrib.rnn.LSTMCell(rnn_size,initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return decoder_cell cell = tf.contrib.rnn.MultiRNNCell([get_decoder_cell(rnn_size) for _ in range(num_layers)]) output_layer = Dense(target_vocab_size,kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) with tf.variable_scope("decode"): training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=decoder_embed_input,sequence_length=target_sequence_length,time_major=False) training_decoder = tf.contrib.seq2seq.BasicDecoder(cell,training_helper,encoder_state,output_layer) training_decoder_output, _,_ = tf.contrib.seq2seq.dynamic_decode(training_decoder,impute_finished=True,maximum_iterations=max_target_sequence_length) with tf.variable_scope("decode", reuse=True): start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens') predicting_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(decoder_embeddings,start_tokens,target_letter_to_int['<EOS>']) predicting_decoder = tf.contrib.seq2seq.BasicDecoder(cell,predicting_helper,encoder_state,output_layer) predicting_decoder_output, _,_ = tf.contrib.seq2seq.dynamic_decode(predicting_decoder,impute_finished=True,maximum_iterations=max_target_sequence_length) ``` [letters_source.txt](https://github.com/tensorflow/tensorflow/files/2823741/letters_source.txt) [letters_target.txt](https://github.com/tensorflow/tensorflow/files/2823742/letters_target.txt)
True
dynamic_decode reports error under eager_execution ? “ValueError: The inequality of unknown TensorShapes is undefined." - - TensorFlow version (use command below): 1.2 I have encountered a weird problem when transforming a usual seq2seq code into eager execution mode. After changing the placeholder input to numpy array, by calling `tf.contrib.seq2seq.dynamic_decode(training_decoder,impute_finished=True,maximum_iterations=max_target_sequence_length)` gives error “ValueError: The inequality of unknown TensorShapes is undefined." Without activating eager execution error, everything is fine. **Code to reproduce the issue** The code requires two files from github: https://github.com/udacity/deep-learning/blob/master/seq2seq/data/letters_source.txt and https://github.com/udacity/deep-learning/tree/master/seq2seq/data/letters_target.txt ``` import tensorflow as tf import tensorflow.contrib.eager as tfe tfe.enable_eager_execution() from distutils.version import LooseVersion from tensorflow.python.layers.core import Dense assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) import numpy as np import time import tensorflow as tf with open('data/letters_source.txt', 'r', encoding='utf-8') as f: source_data = f.read() with open('data/letters_target.txt', 'r', encoding='utf-8') as f: target_data = f.read() def extract_character_vocab(data): special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>'] set_words = list(set([character for line in data.split('\n') for character in line])) int_to_vocab = {idx: word for idx, word in enumerate(special_words + set_words)} vocab_to_int = {word: idx for idx, word in int_to_vocab.items()} return int_to_vocab, vocab_to_int source_int_to_letter, source_letter_to_int = extract_character_vocab(source_data) target_int_to_letter, target_letter_to_int = extract_character_vocab(target_data) source_int = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_data.split('\n')] target_int = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_data.split('\n')] def get_batches(targets, sources, batch_size, source_pad_int, target_pad_int): for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths def pad_sentence_batch(sentence_batch, pad_int): max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] display_step = 50 # 每隔50轮输出loss epochs = 60 batch_size = 128 rnn_size = 50 num_layers = 2 encoding_embedding_size = 15 decoding_embedding_size = 15 learning_rate = 0.001 checkpoint = "trained_model.ckpt" train_source = source_int[batch_size:] train_target = target_int[batch_size:] valid_source = source_int[:batch_size] valid_target = target_int[:batch_size] (valid_targets_batch, 
valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size, source_letter_to_int['<PAD>'],target_letter_to_int['<PAD>'])) sess = tf.InteractiveSession() tf.global_variables_initializer() epoch_i = 1 batch_i = 0 (targets_batch, sources_batch, targets_lengths, sources_lengths) = next(get_batches(train_target, train_source, batch_size,source_letter_to_int['<PAD>'],target_letter_to_int['<PAD>'])) input_data = sources_batch targets = targets_batch lr = learning_rate target_sequence_length = targets_lengths source_sequence_length= sources_lengths max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len') #Encoder source_vocab_size = len(source_letter_to_int) target_vocab_size = len(target_letter_to_int) def get_lstm_cell(rnn_size): lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return lstm_cell encoder_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size) cell = tf.contrib.rnn.MultiRNNCell([get_lstm_cell(rnn_size) for _ in range(num_layers)]) encoder_output, encoder_state = tf.nn.dynamic_rnn(cell, encoder_embed_input, sequence_length=source_sequence_length, dtype=tf.float32) ending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1]) decoder_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<GO>']), ending], 1) target_vocab_size = len(target_letter_to_int) decoder_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) decoder_embed_input = tf.nn.embedding_lookup(decoder_embeddings, decoder_input) def get_decoder_cell(rnn_size): decoder_cell = tf.contrib.rnn.LSTMCell(rnn_size,initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) return decoder_cell cell = tf.contrib.rnn.MultiRNNCell([get_decoder_cell(rnn_size) for _ in range(num_layers)]) output_layer = Dense(target_vocab_size,kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)) with tf.variable_scope("decode"): training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=decoder_embed_input,sequence_length=target_sequence_length,time_major=False) training_decoder = tf.contrib.seq2seq.BasicDecoder(cell,training_helper,encoder_state,output_layer) training_decoder_output, _,_ = tf.contrib.seq2seq.dynamic_decode(training_decoder,impute_finished=True,maximum_iterations=max_target_sequence_length) with tf.variable_scope("decode", reuse=True): start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens') predicting_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(decoder_embeddings,start_tokens,target_letter_to_int['<EOS>']) predicting_decoder = tf.contrib.seq2seq.BasicDecoder(cell,predicting_helper,encoder_state,output_layer) predicting_decoder_output, _,_ = tf.contrib.seq2seq.dynamic_decode(predicting_decoder,impute_finished=True,maximum_iterations=max_target_sequence_length) ``` [letters_source.txt](https://github.com/tensorflow/tensorflow/files/2823741/letters_source.txt) [letters_target.txt](https://github.com/tensorflow/tensorflow/files/2823742/letters_target.txt)
non_process
dynamic decode reports error under eager execution “valueerror the inequality of unknown tensorshapes is undefined tensorflow version use command below i have encountered a weird problem when transforming a usual code into eager execution mode after changing the placeholder input to numpy array by calling tf contrib dynamic decode training decoder impute finished true maximum iterations max target sequence length gives error “valueerror the inequality of unknown tensorshapes is undefined without activating eager execution error everything is fine code to reproduce the issue the code requires two files from github and import tensorflow as tf import tensorflow contrib eager as tfe tfe enable eager execution from distutils version import looseversion from tensorflow python layers core import dense assert looseversion tf version looseversion please use tensorflow version or newer print tensorflow version format tf version import numpy as np import time import tensorflow as tf with open data letters source txt r encoding utf as f source data f read with open data letters target txt r encoding utf as f target data f read def extract character vocab data special words set words list set int to vocab idx word for idx word in enumerate special words set words vocab to int word idx for idx word in int to vocab items return int to vocab vocab to int source int to letter source letter to int extract character vocab source data target int to letter target letter to int extract character vocab target data source int for letter in line for line in source data split n target int for letter in line for line in target data split n def get batches targets sources batch size source pad int target pad int for batch i in range len sources batch size start i batch i batch size sources batch sources targets batch targets pad sources batch np array pad sentence batch sources batch source pad int pad targets batch np array pad sentence batch targets batch target pad int pad targets lengths for target in pad targets batch pad targets lengths append len target pad source lengths for source in pad sources batch pad source lengths append len source yield pad targets batch pad sources batch pad targets lengths pad source lengths def pad sentence batch sentence batch pad int max sentence max return max sentence len sentence for sentence in sentence batch display step epochs batch size rnn size num layers encoding embedding size decoding embedding size learning rate checkpoint trained model ckpt train source source int train target target int valid source source int valid target target int valid targets batch valid sources batch valid targets lengths valid sources lengths next get batches valid target valid source batch size source letter to int target letter to int sess tf interactivesession tf global variables initializer epoch i batch i targets batch sources batch targets lengths sources lengths next get batches train target train source batch size source letter to int target letter to int input data sources batch targets targets batch lr learning rate target sequence length targets lengths source sequence length sources lengths max target sequence length tf reduce max target sequence length name max target len encoder source vocab size len source letter to int target vocab size len target letter to int def get lstm cell rnn size lstm cell tf contrib rnn lstmcell rnn size initializer tf random uniform initializer seed return lstm cell encoder embed input tf contrib layers embed sequence input data source vocab size 
encoding embedding size cell tf contrib rnn multirnncell encoder output encoder state tf nn dynamic rnn cell encoder embed input sequence length source sequence length dtype tf ending tf strided slice targets decoder input tf concat target letter to int ending target vocab size len target letter to int decoder embeddings tf variable tf random uniform decoder embed input tf nn embedding lookup decoder embeddings decoder input def get decoder cell rnn size decoder cell tf contrib rnn lstmcell rnn size initializer tf random uniform initializer seed return decoder cell cell tf contrib rnn multirnncell output layer dense target vocab size kernel initializer tf truncated normal initializer mean stddev with tf variable scope decode training helper tf contrib traininghelper inputs decoder embed input sequence length target sequence length time major false training decoder tf contrib basicdecoder cell training helper encoder state output layer training decoder output tf contrib dynamic decode training decoder impute finished true maximum iterations max target sequence length with tf variable scope decode reuse true start tokens tf tile tf constant dtype tf name start tokens predicting helper tf contrib greedyembeddinghelper decoder embeddings start tokens target letter to int predicting decoder tf contrib basicdecoder cell predicting helper encoder state output layer predicting decoder output tf contrib dynamic decode predicting decoder impute finished true maximum iterations max target sequence length
0
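The error in the record above surfaces at the `dynamic_decode` call, whose `maximum_iterations` argument is the eager tensor returned by `tf.reduce_max`. One hedged guess at a workaround, not verified against this exact report, is to pass a plain Python int instead so that no shape comparison happens on an eager tensor:

```python
import tensorflow as tf
import tensorflow.contrib.eager as tfe  # TF 1.x, matching the report

tfe.enable_eager_execution()

target_sequence_length = [7, 7, 7]                       # toy lengths
max_target_sequence_length = tf.reduce_max(target_sequence_length)

# Assumption: int() on the eager tensor yields a host-side value that
# dynamic_decode can take as maximum_iterations without shape comparisons.
max_iterations = int(max_target_sequence_length)
print(max_iterations)  # 7
```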
27,033
6,813,183,800
IssuesEvent
2017-11-06 08:10:06
BTDF/DeploymentFramework
https://api.github.com/repos/BTDF/DeploymentFramework
closed
Add support for ESB Toolkit 2.1 for BizTalk 2010
CodePlexMigrationInitiated enhancement ESB Toolkit Integration Impact: High Release 5.0
Need to add support for ESB Toolkit 2.1 to support BizTalk 2010. #### This work item was migrated from CodePlex CodePlex work item ID: '7507' Assigned to: 'tfabraham' Vote count: '1'
1.0
Add support for ESB Toolkit 2.1 for BizTalk 2010 - Need to add support for ESB Toolkit 2.1 to support BizTalk 2010. #### This work item was migrated from CodePlex CodePlex work item ID: '7507' Assigned to: 'tfabraham' Vote count: '1'
non_process
add support for esb toolkit for biztalk need to add support for esb toolkit to support biztalk this work item was migrated from codeplex codeplex work item id assigned to tfabraham vote count
0
143,007
11,503,005,421
IssuesEvent
2020-02-12 20:12:43
GetTerminus/terminus-ui
https://api.github.com/repos/GetTerminus/terminus-ui
closed
Ts Input not showing validations when touched
Needs: exploration P2: Urgent Target: latest Type: bug
#### 1. What is the expected behavior? An input with 'required' validation has red border when focused, and the validation message 'Required' below in red once touched. #### 2. What is the current behavior? Original: The form does not have a red border or the required string when the input is touched NOTE: In my tests (BC) the control does get the correct red border when focused - but the validation message doesn't appear. (setting `validateOnChange` to `true` inside the form field component didn't seem to have any effect). #### 3. What are the steps to reproduce? go to any ts-input with required validator and click in, click out. #### 4. Which versions of this library, Angular, TypeScript, & browsers are affected? - UI Library: Latest - Angular: v8
1.0
Ts Input not showing validations when touched - #### 1. What is the expected behavior? An input with 'required' validation has red border when focused, and the validation message 'Required' below in red once touched. #### 2. What is the current behavior? Original: The form does not have a red border or the required string when the input is touched NOTE: In my tests (BC) the control does get the correct red border when focused - but the validation message doesn't appear. (setting `validateOnChange` to `true` inside the form field component didn't seem to have any effect). #### 3. What are the steps to reproduce? go to any ts-input with required validator and click in, click out. #### 4. Which versions of this library, Angular, TypeScript, & browsers are affected? - UI Library: Latest - Angular: v8
non_process
ts input not showing validations when touched what is the expected behavior an input with required validation has red border when focused and the validation message required below in red once touched what is the current behavior original the form does not have a red border or the required string when the input is touched note in my tests bc the control does get the correct red border when focused but the validation message doesn t appear setting validateonchange to true inside the form field component didn t seem to have any effect what are the steps to reproduce go to any ts input with required validator and click in click out which versions of this library angular typescript browsers are affected ui library latest angular
0
12,062
14,739,713,089
IssuesEvent
2021-01-07 07:46:41
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Aavaz Team Testing
anc-process anp-important ant-support
In GitLab by @kdjstudios on Sep 14, 2018, 09:05 @all Hello Team, It has become extremely apparent that there is a lackluster effort from the team at testing. This is becoming more prevalent on a daily basis and almost every single issue that needs to be tested does not appear to be effectively tested before it comes to me for release tests. Issues: - No one is posting their testing procedure and testing results!!! - Come on team, this is the most basic of tasks and should be easily completed. We MUST document everything in order to ensure proper communication. - I am having to test the core functionalities and find a lot of the issues. - Items should be extensively tested by your team first and then I should only be testing to approve and find typos, suggest enhancements, etc. - I should not be finding any issues and when I do it should be very seldom, not on every ticket. - More often now it seems your team is not checking to ensure the correct branch is set up for me to test. - Thus resulting in unnecessary delays and even causing more issues at times. - Communication - I am posting requests to have items reviewed in the morning meetings with @tim.traylor which (usually) happen daily, but they do not appear to be brought up. This is causing unneeded delays and tickets are taking way longer than they should. - In conjunction with the above, I don't think I've seen many posts in Gitlab from those morning meetings. We MUST communicate effectively to the whole team. Always post any updates in Gitlab no matter how trivial or unimportant it may seem. - The team does not appear to be reaching out effectively and lots of things are still being assumed, which is causing extra work for both teams and sometimes even results in additional issues. - Priority - Tickets with lower priority are being worked on when we have a backlog of higher priority tickets. Please keep in mind to always work on the highest priority tickets first, unless requested and approved specifically by @tim.traylor, Gary, or myself. Thoughts? - Do we need to have your team filling out test cases for each issue to better document the process? Then submitting that to us for review and follow up. - Does your team need other resources or more communication from our team in any way? NOTE: My daily testing time has gone from less than 1 hour (about 7-10 months ago) to about 2-3 hours a day minimum. My release testing time has gone from about 2 hours on Friday to about 3+ hours. This also seems to correlate with the amount of post-release fixes that are needed too, as over the last few months we are finding and having more issues after a release.
1.0
Aavaz Team Testing - In GitLab by @kdjstudios on Sep 14, 2018, 09:05 @all Hello Team, It has become extremely apparent that there is a lackluster effort from the team at testing. This is becoming more prevalent on a daily basis and almost every single issue that needs to be tested does not appear to be effectively tested before it comes to me for release tests. Issues: - No one is posting their testing procedure and testing results!!! - Come on team, this is the most basic of tasks and should be easily completed. We MUST document everything in order to ensure proper communication. - I am having to test the core functionalities and find a lot of the issues. - Items should be extensively tested by your team first and then I should only be testing to approve and find typos, suggest enhancements, etc. - I should not be finding any issues and when I do it should be very seldom, not on every ticket. - More often now it seems your team is not checking to ensure the correct branch is set up for me to test. - Thus resulting in unnecessary delays and even causing more issues at times. - Communication - I am posting requests to have items reviewed in the morning meetings with @tim.traylor which (usually) happen daily, but they do not appear to be brought up. This is causing unneeded delays and tickets are taking way longer than they should. - In conjunction with the above, I don't think I've seen many posts in Gitlab from those morning meetings. We MUST communicate effectively to the whole team. Always post any updates in Gitlab no matter how trivial or unimportant it may seem. - The team does not appear to be reaching out effectively and lots of things are still being assumed, which is causing extra work for both teams and sometimes even results in additional issues. - Priority - Tickets with lower priority are being worked on when we have a backlog of higher priority tickets. Please keep in mind to always work on the highest priority tickets first, unless requested and approved specifically by @tim.traylor, Gary, or myself. Thoughts? - Do we need to have your team filling out test cases for each issue to better document the process? Then submitting that to us for review and follow up. - Does your team need other resources or more communication from our team in any way? NOTE: My daily testing time has gone from less than 1 hour (about 7-10 months ago) to about 2-3 hours a day minimum. My release testing time has gone from about 2 hours on Friday to about 3+ hours. This also seems to correlate with the amount of post-release fixes that are needed too, as over the last few months we are finding and having more issues after a release.
process
aavaz team testing in gitlab by kdjstudios on sep all hello team it has become extremely apparent that there is a lackluster effort from the team at testing this is becoming more prevalent on a daily basis and almost every single issue that needs to be tested does not appear to be effectively tested before it comes to me for release tests issues no one is posting their testing procedure and testing results come on team this is the most basic of tasks and should be easily completed we must document everything in order to ensure proper communication i am having to test the core functionalities and find a lot of the issues items should be extensively tested by your team first and then i should only be testing to approve and find typos suggest enhancements etc i should not be finding any issues and when i do it should be very seldom not on every ticket more often now it seems your team is not checking to ensure the correct branch is set up for me to test thus resulting in unnecessary delays and even causing more issues at times communication i am posting requests to have items reviewed in the morning meetings with tim traylor which usually happen daily but they do not appear to be brought up this is causing unneeded delays and tickets are taking way longer than they should in conjunction with the above i don t think i ve seen many posts in gitlab from those morning meetings we must communicate effectively to the whole team always post any updates in gitlab no matter how trivial or unimportant it may seem the team does not appear to be reaching out effectively and lots of things are still being assumed which is causing extra work for both teams and sometimes even results in additional issues priority tickets with lower priority are being worked on when we have a backlog of higher priority tickets please keep in mind to always work on the highest priority tickets first unless requested and approved specifically by tim traylor gary or myself thoughts do we need to have your team filling out test cases for each issue to better document the process then submitting that to us for review and follow up does your team need other resources or more communication from our team in any way note my daily testing time has gone from less than hour about months ago to about hours a day minimum my release testing time has gone from about hours on friday to about hours this also seems to correlate with the amount of post release fixes that are needed too as over the last few months we are finding and having more issues after a release
1
19,543
25,865,447,881
IssuesEvent
2022-12-13 20:27:13
carbon-design-system/ibm-cloud-cognitive
https://api.github.com/repos/carbon-design-system/ibm-cloud-cognitive
closed
Multi add select release review
type: process improvement component: AddSelect
## Review for release ### Readiness - [x] One or more scenarios for a design pattern have been identified as a useful unit of functionality to publish. - [x] A functioning component or components delivering those scenarios have been delivered and merged to main. - [x] Design maintainer has approved the implementation for those scenarios (via a comment on the relevant issue/epic). ### Engineering review - [x] Breaking changes have only been introduced with prior approval, discussion and documented in release notes. Ideally deprecation notices in the prior major version must have been added in a timely fashion. - [x] The implementation takes into account, and does not impair, remaining and anticipated design scenarios. - [x] Public component features (names, props, etc) are consistent with Carbon-defined conventions and are consistent internally. Where there isn't an existing convention to apply, ensure robust precedents are being established. - [x] The UI produced is accessible, responsive, translatable, cross-browser, and responds to the currently set Carbon theme. - [x] Components are functional components using hooks. - [x] Public components which produce DOM structures support className. - [x] Public components support a ref (via React.forwardRef). - [x] Public component supports a Devtools attribute - [x] All significant DOM elements have meaningful classes. - [x] Additional attributes that are not identified as props (such as title, aria-\*, etc) are passed through to an appropriate DOM node of the component as HTML attributes. - [x] No warnings, errors or log messages in the console. - [x] Each public component JS is exported in /src/components/index.js, each public component SCSS is included in /src/components/\_index.scss, and each public component has a flag in package-settings.js. - [x] Each public component SCSS lists all of the Carbon and C&CS components imported and used by the JavaScript code and explicitly imports the SCSS for each of these components. - [x] Imports updated accordingly for v11 branch ### Standards - [x] No linter warnings or errors are produced. - [x] Carbon tokens (theme, layout, motion) are used unless the design specifies otherwise. - [x] All components utilizing motion must include reduced-motion queries for accessibility purposes - read more here. - [x] Code is formatted according to prettier rules (no use of //prettier-ignore). - [x] Code is clear, maintainable and follows standard React practices and the code guidelines. - [x] Copyright header in every source file (js, css, scss etc.) with the appropriate years. ### Testing - [x] There is a set of test cases for the components. - [x] The tests exercise all inputs (props, slots, etc) and verify appropriate outputs. - [x] The tests validate the behaviors and interactions defined in the design where practical. - [x] The tests achieve 100% coverage. Usage of istanbul ignore is appropriate and not extensive. - [x] No warnings, errors or log messages in the test output. ### Documentation - [x] Source code is satisfactorily commented and provides DocGen comments for all public components and their props. - [x] There is a story for each design scenario. The stories expose all relevant props and actions, and additional usage documentation and code samples are available on the 'Docs' tab along with the props table. - [x] There is a sandbox (or multiple sandboxes if appropriate) on CodeSandbox for the components.
1.0
Multi add select release review - ## Review for release ### Readiness - [x] One or more scenarios for a design pattern have been identified as a useful unit of functionality to publish. - [x] A functioning component or components delivering those scenarios have been delivered and merged to main. - [x] Design maintainer has approved the implementation for those scenarios (via a comment on the relevant issue/epic). ### Engineering review - [x] Breaking changes have only been introduced with prior approval, discussion and documented in release notes. Ideally deprecation notices in the prior major version must have been added in a timely fashion. - [x] The implementation takes into account, and does not impair, remaining and anticipated design scenarios. - [x] Public component features (names, props, etc) are consistent with Carbon-defined conventions and are consistent internally. Where there isn't an existing convention to apply, ensure robust precedents are being established. - [x] The UI produced is accessible, responsive, translatable, cross-browser, and responds to the currently set Carbon theme. - [x] Components are functional components using hooks. - [x] Public components which produce DOM structures support className. - [x] Public components support a ref (via React.forwardRef). - [x] Public component supports a Devtools attribute - [x] All significant DOM elements have meaningful classes. - [x] Additional attributes that are not identified as props (such as title, aria-\*, etc) are passed through to an appropriate DOM node of the component as HTML attributes. - [x] No warnings, errors or log messages in the console. - [x] Each public component JS is exported in /src/components/index.js, each public component SCSS is included in /src/components/\_index.scss, and each public component has a flag in package-settings.js. - [x] Each public component SCSS lists all of the Carbon and C&CS components imported and used by the JavaScript code and explicitly imports the SCSS for each of these components. - [x] Imports updated accordingly for v11 branch ### Standards - [x] No linter warnings or errors are produced. - [x] Carbon tokens (theme, layout, motion) are used unless the design specifies otherwise. - [x] All components utilizing motion must include reduced-motion queries for accessibility purposes - read more here. - [x] Code is formatted according to prettier rules (no use of //prettier-ignore). - [x] Code is clear, maintainable and follows standard React practices and the code guidelines. - [x] Copyright header in every source file (js, css, scss etc.) with the appropriate years. ### Testing - [x] There is a set of test cases for the components. - [x] The tests exercise all inputs (props, slots, etc) and verify appropriate outputs. - [x] The tests validate the behaviors and interactions defined in the design where practical. - [x] The tests achieve 100% coverage. Usage of istanbul ignore is appropriate and not extensive. - [x] No warnings, errors or log messages in the test output. ### Documentation - [x] Source code is satisfactorily commented and provides DocGen comments for all public components and their props. - [x] There is a story for each design scenario. The stories expose all relevant props and actions, and additional usage documentation and code samples are available on the 'Docs' tab along with the props table. - [x] There is a sandbox (or multiple sandboxes if appropriate) on CodeSandbox for the components.
process
multi add select release review review for release readiness one or more scenarios for a design pattern have been identified as a useful unit of functionality to publish a functioning component or components delivering those scenarios have been delivered and merged to main design maintainer has approved the implementation for those scenarios via a comment on the relevant issue epic engineering review breaking changes have only been introduced with prior approval discussion and documented in release notes ideally deprecation notices in the prior major version must have been added in a timely fashion the implementation takes into account and does not impair remaining and anticipated design scenarios public component features names props etc are consistent with carbon defined conventions and are consistent internally where there isn t an existing convention to apply ensure robust precedents are being established the ui produced is accessible responsive translatable cross browser and responds to the currently set carbon theme components are functional components using hooks public components which produce dom structures support classname public components support a ref via react forwardref public component supports a devtools attribute all significant dom elements have meaningful classes additional attributes that are not identified as props such as title aria etc are passed through to an appropriate dom node of the component as html attributes no warnings errors or log messages in the console each public component js is exported in src components index js each public component scss is included in src components index scss and each public component has a flag in package settings js each public component scss lists all of the carbon and c cs components imported and used by the javascript code and explicitly imports the scss for each of these components imports updated accordingly for branch standards no linter warnings or errors are produced carbon tokens theme layout motion are used unless the design specifies otherwise all components utilizing motion must include reduced motion queries for accessibility purposes read more here code is formatted according to prettier rules no use of prettier ignore code is clear maintainable and follows standard react practices and the code guidelines copyright header in every source file js css scss etc with the appropriate years testing there is a set of test cases for the components the tests exercise all inputs props slots etc and verify appropriate outputs the tests validate the behaviors and interactions defined in the design where practical the tests achieve coverage usage of istanbul ignore is appropriate and not extensive no warnings errors or log messages in the test output documentation source code is satisfactorily commented and provides docgen comments for all public components and their props there is a story for each design scenario the stories expose all relevant props and actions and additional usage documentation and code samples are available on the docs tab along with the props table there is a sandbox or multiple sandboxes if appropriate on codesandbox for the components
1
4,952
7,800,967,797
IssuesEvent
2018-06-09 15:35:51
bonopi07/2018-1_advML_project
https://api.github.com/repos/bonopi07/2018-1_advML_project
closed
char-level embedding VS word-level embedding
deep learning processing data
The reviews in the given dataset are made up of a limited set of characters (46 in total). When training the deep learning model, decide whether to train at the char level or the word level. 1. char-level: Build a dictionary that converts each char to an idx, so that integer values are passed to the training model. 2. word-level: Run a separate word embedding step (e.g. Word2Vec).
1.0
char-level embedding VS word-level embedding - The reviews in the given dataset are made up of a limited set of characters (46 in total). When training the deep learning model, decide whether to train at the char level or the word level. 1. char-level: Build a dictionary that converts each char to an idx, so that integer values are passed to the training model. 2. word-level: Run a separate word embedding step (e.g. Word2Vec).
process
char level embedding vs word level embedding the reviews in the given dataset are made up of a limited set of characters in total when training the deep learning model decide whether to train at the char level or the word level char level build a dictionary that converts each char to an idx so that integer values are passed to the training model word level run a separate word embedding step e g
1
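The char-level option in the record above maps each character to an integer index before feeding the model. A minimal sketch of that dictionary step (the toy review strings here are made up):

```python
# Build a char -> idx dictionary and encode reviews as integer sequences.
reviews = ["abc bca", "cab"]                   # hypothetical review strings
chars = sorted(set("".join(reviews)))
char_to_idx = {c: i for i, c in enumerate(chars)}
encoded = [[char_to_idx[c] for c in r] for r in reviews]
print(char_to_idx)   # {' ': 0, 'a': 1, 'b': 2, 'c': 3}
print(encoded)       # [[1, 2, 3, 0, 2, 3, 1], [3, 1, 2]]
```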
203,867
7,077,952,355
IssuesEvent
2018-01-10 00:39:29
google/google-api-java-client
https://api.github.com/repos/google/google-api-java-client
closed
Google Contacts API v3 using OAuth 2 & Atom?
2–5 stars Type-Sample imported priority: p2
_From [burtonrh...@gmail.com](https://code.google.com/u/117848461069421880149/) on October 12, 2011 14:32:21_ Which Google API and version (e.g. Google Calendar Data API version 2)? Google Contacts API version 3 What format (e.g. JSON, Atom)? Atom What Authentication (e.g. OAuth, OAuth 2, ClientLogin)? OAuth 2 Java environment (e.g. Java 6, Android 2.3, App Engine)? Java 6 External references, such as API reference guide? Please provide any additional information below. This seems like a pretty likely use case and would really help new developers get started quickly since the new framework requires us to create our own xml data model. _Original issue: http://code.google.com/p/google-api-java-client/issues/detail?id=317_
1.0
Google Contacts API v3 using OAuth 2 & Atom? - _From [burtonrh...@gmail.com](https://code.google.com/u/117848461069421880149/) on October 12, 2011 14:32:21_ Which Google API and version (e.g. Google Calendar Data API version 2)? Google Contacts API version 3 What format (e.g. JSON, Atom)? Atom What Authentication (e.g. OAuth, OAuth 2, ClientLogin)? OAuth 2 Java environment (e.g. Java 6, Android 2.3, App Engine)? Java 6 External references, such as API reference guide? Please provide any additional information below. This seems like a pretty likely use case and would really help new developers get started quickly since the new framework requires us to create our own xml data model. _Original issue: http://code.google.com/p/google-api-java-client/issues/detail?id=317_
non_process
google contacts api using oauth atom from on october which google api and version e g google calendar data api version google contacts api version what format e g json atom atom what authentication e g oauth oauth clientlogin oauth java environment e g java android app engine java external references such as api reference guide please provide any additional information below this seems like a pretty likely use case and would really help new developers get started quickly since the new framework requires us to create our own xml data model original issue
0
9,884
12,887,693,478
IssuesEvent
2020-07-13 11:42:29
deepset-ai/haystack
https://api.github.com/repos/deepset-ai/haystack
closed
Issue with Tutorial2_Finetune_a_model_on_your_data
preprocessing question
I am trying to fine-tune a model with my custom data set in SQuAD format. I'm facing the following error: ``` IndexError: Cannot choose from an empty sequence The above exception was the direct cause of the following exception: IndexError Traceback (most recent call last) <ipython-input-6-667d6da5a3ea> in <module>() 1 #train_data = "data/squad20" 2 train_data = "/content/" ----> 3 reader.train(data_dir=train_data, train_filename="COI_json.json", use_gpu=False, n_epochs=1, save_dir="my_model") 4 frames /usr/lib/python3.6/multiprocessing/pool.py in next(self, timeout) 733 if success: 734 return value --> 735 raise value 736 737 __next__ = next # XXX IndexError: Cannot choose from an empty sequence ``` It occurred when I was running the following query: ``` train_data = "/content/" reader.train(data_dir=train_data, train_filename="COI_json.json", use_gpu=False, n_epochs=1, save_dir="my_model") ``` To get some insight into the data `"COI_json.json"`, please give a look [here.](https://github.com/anirbansaha96/AI-ML-Playground/blob/master/COI_json.json)
1.0
Issue with Tutorial2_Finetune_a_model_on_your_data - I am trying to fine-tune a model with my custom data set in SQuAD format. I'm facing the following error: ``` IndexError: Cannot choose from an empty sequence The above exception was the direct cause of the following exception: IndexError Traceback (most recent call last) <ipython-input-6-667d6da5a3ea> in <module>() 1 #train_data = "data/squad20" 2 train_data = "/content/" ----> 3 reader.train(data_dir=train_data, train_filename="COI_json.json", use_gpu=False, n_epochs=1, save_dir="my_model") 4 frames /usr/lib/python3.6/multiprocessing/pool.py in next(self, timeout) 733 if success: 734 return value --> 735 raise value 736 737 __next__ = next # XXX IndexError: Cannot choose from an empty sequence ``` It occurred when I was running the following query: ``` train_data = "/content/" reader.train(data_dir=train_data, train_filename="COI_json.json", use_gpu=False, n_epochs=1, save_dir="my_model") ``` To get some insight into the data `"COI_json.json"`, please give a look [here.](https://github.com/anirbansaha96/AI-ML-Playground/blob/master/COI_json.json)
process
issue with finetune a model on your data i am trying to fine tune a model with my custom data set in squad format i m facing the following error indexerror cannot choose from an empty sequence the above exception was the direct cause of the following exception indexerror traceback most recent call last in train data data train data content reader train data dir train data train filename coi json json use gpu false n epochs save dir my model frames usr lib multiprocessing pool py in next self timeout if success return value raise value next next xxx indexerror cannot choose from an empty sequence it occurred when i was running the following query train data content reader train data dir train data train filename coi json json use gpu false n epochs save dir my model to get some insight into the data coi json json please give a look
1
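The haystack failure above hinges on the shape of the SQuAD-format training file: a sampler raising `IndexError: Cannot choose from an empty sequence` is what one would expect if the file yields no usable question–answer pairs. As a hedged debugging aid (my sketch, not haystack code and not from the original thread), a quick Python pass over the file can confirm that every paragraph carries a context and a non-empty `qas` list; the filename `COI_json.json` is taken from the record.

```
import json

# Minimal sanity check for a SQuAD-format training file (an assumption-based
# debugging aid, not part of haystack): every paragraph should have context
# text and at least one entry in "qas", otherwise a sampler can end up
# choosing from an empty sequence.
def check_squad_file(path):
    with open(path) as handle:
        squad = json.load(handle)
    problems = []
    for doc in squad.get("data", []):
        for i, paragraph in enumerate(doc.get("paragraphs", [])):
            if not paragraph.get("context"):
                problems.append(f"{doc.get('title', '?')}[{i}]: empty context")
            if not paragraph.get("qas"):
                problems.append(f"{doc.get('title', '?')}[{i}]: no qas")
    return problems

for problem in check_squad_file("COI_json.json"):
    print(problem)
```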
399,732
27,253,205,101
IssuesEvent
2023-02-22 09:39:38
ddnexus/pagy
https://api.github.com/repos/ddnexus/pagy
opened
Docs: Update AJAX doc
documentation
### Before submitting... - [X] I searched through the [Documentation](https://ddnexus.github.io/pagy/) - [X] I searched through the [Issues](https://github.com/ddnexus/pagy/issues) - [X] I searched through the [Q&A](https://github.com/ddnexus/pagy/discussions/categories/q-a) ### Description Consider updating [AJAX](https://ddnexus.github.io/pagy/docs/api/javascript/ajax/) for the modern day tooling - nobody uses .js.erb anymore - Turbo Streams (and Stimulus JS) are what they use, and it serves much the same purpose.
1.0
Docs: Update AJAX doc - ### Before submitting... - [X] I searched through the [Documentation](https://ddnexus.github.io/pagy/) - [X] I searched through the [Issues](https://github.com/ddnexus/pagy/issues) - [X] I searched through the [Q&A](https://github.com/ddnexus/pagy/discussions/categories/q-a) ### Description Consider updating [AJAX](https://ddnexus.github.io/pagy/docs/api/javascript/ajax/) for the modern day tooling - nobody uses .js.erb anymore - Turbo Streams (and Stimulus JS) are what they use, and it serves much the same purpose.
non_process
docs update ajax doc before submitting i searched through the i searched through the i searched through the description consider updating for the modern day tooling nobody uses js erb anymore turbo streams and stimulus js are what they use and it serves much the same purpose
0
15,190
18,958,291,744
IssuesEvent
2021-11-18 23:29:11
vectordotdev/vector
https://api.github.com/repos/vectordotdev/vector
closed
Support inline aliases (helper rules) in the `parse_grok` function
type: enhancement domain: processing domain: vrl
It's common for Grok parsers to allow for aliases, or helper rules, for patterns that simplify Grok parsing rules. Because VRL already have a `patterns` argument, I propose that we add an optional `aliases` argument that can be used in the patterns: ``` . |= parse_grok( .message, patterns: [ "%{_s3_bucket_owner} %{_s3_bucket} %{_date_access} (?>%{_client_ip}|-) %{_client_id} %{_request_id} %{_s3_operation} %{notSpace} "(?>%{_method} |)%{_url}(?> %{_version}|)" %{_status_code} %{_s3_error_code} (?>%{_bytes_written}|-) (?>%{_object_size}|-) %{_duration} (?>%{_request_processing_time}|-) "%{_referer}" "%{_user_agent}" %{_request_version_id}.*", "%{_s3_bucket_owner} %{_s3_bucket} %{_date_access} (?>%{_client_ip}|-) %{_client_id} %{_request_id} %{_s3_operation}.*" ] aliases: { "_s3_bucket_owner": "%{notSpace:s3.bucket_owner}", "_s3_bucket": "%{notSpace:s3.bucket}", "_s3_operation": "%{notSpace:s3.operation}", "_s3_error_code": "%{notSpace:s3.error_code:nullIf("-")}", "_request_processing_time": "%{integer:http.request_processing_time}", "_request_id": "%{notSpace:http.request_id}", "_request_version_id": "%{notSpace:http.request_version_id:nullIf("-")}", "_bytes_written": "%{integer:network.bytes_written}", "_bytes_read": "%{integer:network.bytes_read}", "_object_size": "%{integer:network.object_size}", "_client_ip": "%{ipOrHost:network.client.ip}", "_client_id": "%{notSpace:network.client.id}", "_version": "HTTP\/%{regex(\"\\d+\\.\\d+\"):http.version}", "_url": "%{notSpace:http.url}", "_ident": "%{notSpace:http.ident:nullIf("-")}", "_user_agent": "%{regex(\"[^\\\\"]*"):http.useragent}", "_referer": "%{notSpace:http.referer:nullIf("-")}", "_status_code": "%{integer:http.status_code}", "_method": "%{word:http.method}", "_duration": "%{integer:duration:scale(1000000)}", "_date_access \[%{date(\"dd/MMM/yyyy:HH:mm:ss Z\"):date_access}\]" } ) ```
1.0
Support inline aliases (helper rules) in the `parse_grok` function - It's common for Grok parsers to allow for aliases, or helper rules, for patterns that simplify Grok parsing rules. Because VRL already have a `patterns` argument, I propose that we add an optional `aliases` argument that can be used in the patterns: ``` . |= parse_grok( .message, patterns: [ "%{_s3_bucket_owner} %{_s3_bucket} %{_date_access} (?>%{_client_ip}|-) %{_client_id} %{_request_id} %{_s3_operation} %{notSpace} "(?>%{_method} |)%{_url}(?> %{_version}|)" %{_status_code} %{_s3_error_code} (?>%{_bytes_written}|-) (?>%{_object_size}|-) %{_duration} (?>%{_request_processing_time}|-) "%{_referer}" "%{_user_agent}" %{_request_version_id}.*", "%{_s3_bucket_owner} %{_s3_bucket} %{_date_access} (?>%{_client_ip}|-) %{_client_id} %{_request_id} %{_s3_operation}.*" ] aliases: { "_s3_bucket_owner": "%{notSpace:s3.bucket_owner}", "_s3_bucket": "%{notSpace:s3.bucket}", "_s3_operation": "%{notSpace:s3.operation}", "_s3_error_code": "%{notSpace:s3.error_code:nullIf("-")}", "_request_processing_time": "%{integer:http.request_processing_time}", "_request_id": "%{notSpace:http.request_id}", "_request_version_id": "%{notSpace:http.request_version_id:nullIf("-")}", "_bytes_written": "%{integer:network.bytes_written}", "_bytes_read": "%{integer:network.bytes_read}", "_object_size": "%{integer:network.object_size}", "_client_ip": "%{ipOrHost:network.client.ip}", "_client_id": "%{notSpace:network.client.id}", "_version": "HTTP\/%{regex(\"\\d+\\.\\d+\"):http.version}", "_url": "%{notSpace:http.url}", "_ident": "%{notSpace:http.ident:nullIf("-")}", "_user_agent": "%{regex(\"[^\\\\"]*"):http.useragent}", "_referer": "%{notSpace:http.referer:nullIf("-")}", "_status_code": "%{integer:http.status_code}", "_method": "%{word:http.method}", "_duration": "%{integer:duration:scale(1000000)}", "_date_access \[%{date(\"dd/MMM/yyyy:HH:mm:ss Z\"):date_access}\]" } ) ```
process
support inline aliases helper rules in the parse grok function it s common for grok parsers to allow for aliases or helper rules for patterns that simplify grok parsing rules because vrl already have a patterns argument i propose that we add an optional aliases argument that can be used in the patterns parse grok message patterns bucket owner bucket date access client ip client id request id operation notspace method url version status code error code bytes written object size duration request processing time referer user agent request version id bucket owner bucket date access client ip client id request id operation aliases bucket owner notspace bucket owner bucket notspace bucket operation notspace operation error code notspace error code nullif request processing time integer http request processing time request id notspace http request id request version id notspace http request version id nullif bytes written integer network bytes written bytes read integer network bytes read object size integer network object size client ip iporhost network client ip client id notspace network client id version http regex d d http version url notspace http url ident notspace http ident nullif user agent regex http useragent referer notspace http referer nullif status code integer http status code method word http method duration integer duration scale date access
1
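To make the semantics of the proposed `aliases` argument concrete, here is a minimal Python sketch of the substitution step. It is an illustration only, not VRL's implementation, and it assumes aliases are purely textual helper rules whose names start with an underscore, as in the example above: each `%{_name}` reference is inlined from the alias table before the Grok pattern would be compiled, with a depth limit so cyclic aliases fail loudly.

```
import re

# Matches helper-rule references like %{_client_ip}; alias names in the
# proposal all start with an underscore.
ALIAS_REF = re.compile(r"%\{(_[A-Za-z0-9_]+)\}")

def expand_aliases(pattern, aliases, max_depth=10):
    """Inline helper rules into a Grok pattern (illustrative sketch only)."""
    for _ in range(max_depth):
        expanded = ALIAS_REF.sub(
            lambda m: aliases.get(m.group(1), m.group(0)), pattern)
        if expanded == pattern:
            return expanded
        pattern = expanded
    raise ValueError("alias expansion did not converge; cycle suspected")

aliases = {
    "_client_ip": "%{ipOrHost:network.client.ip}",
    "_client_id": "%{notSpace:network.client.id}",
}
print(expand_aliases("(?>%{_client_ip}|-) %{_client_id}", aliases))
# (?>%{ipOrHost:network.client.ip}|-) %{notSpace:network.client.id}
```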
14,766
25,723,188,188
IssuesEvent
2022-12-07 15:00:52
renovatebot/renovate
https://api.github.com/repos/renovatebot/renovate
closed
Gemfile not updating when rangeStrategy=update-lockfile
type:bug priority-3-medium status:requirements reproduction:provided
### How are you running Renovate? Mend Renovate hosted app on github.com ### If you're self-hosting Renovate, tell us what version of Renovate you run. _No response_ ### Please select which platform you are using if self-hosting. _No response_ ### If you're self-hosting Renovate, tell us what version of the platform you run. _No response_ ### Was this something which used to work for you, and then stopped? I never saw this working ### Describe the bug The docs for rangeStrategy=update-lockfile say that it will fallback to rangeStrategy=replace when the new version is not supported by the existing range. Given flipper@<~ 0.22.2, which wouldn't support 0.25.0, Renovate tries to update the lockfile but then fails to run the lockfile update command since the Gemfile doesn't support the new version. Why isn't it updating the Gemfile? For reproduction, see https://github.com/narwold/renovate-test/blob/main/renovate.json Note that it does update the Gemfile when using the `replace` strategy (see previous attempt at https://github.com/narwold/renovate-test/pull/6) See previous discussion at https://github.com/renovatebot/renovate/discussions/16369 ### Relevant debug logs The log specifically mentions this error: ``` Bundler could not find compatible versions for gem \"flipper\":\n In snapshot (Gemfile.lock):\n flipper (>= 0.25.0)\n\n In Gemfile:\n flipper\n\n flipper-redis (~> 0.22.2) was resolved to 0.22.2, which depends on\n flipper (~> 0.22.2)\n\nRunning `bundle update` will rebuild your snapshot from scratch, using only\nthe gems in your Gemfile, which may resolve the conflict.\n ``` ...and if you run `bundle install` locally, it's the same result. If you manually update the Gemfile, then it works. ### Have you created a minimal reproduction repository? I have linked to a minimal reproduction repository in the bug description
1.0
Gemfile not updating when rangeStrategy=update-lockfile - ### How are you running Renovate? Mend Renovate hosted app on github.com ### If you're self-hosting Renovate, tell us what version of Renovate you run. _No response_ ### Please select which platform you are using if self-hosting. _No response_ ### If you're self-hosting Renovate, tell us what version of the platform you run. _No response_ ### Was this something which used to work for you, and then stopped? I never saw this working ### Describe the bug The docs for rangeStrategy=update-lockfile say that it will fallback to rangeStrategy=replace when the new version is not supported by the existing range. Given flipper@<~ 0.22.2, which wouldn't support 0.25.0, Renovate tries to update the lockfile but then fails to run the lockfile update command since the Gemfile doesn't support the new version. Why isn't it updating the Gemfile? For reproduction, see https://github.com/narwold/renovate-test/blob/main/renovate.json Note that it does update the Gemfile when using the `replace` strategy (see previous attempt at https://github.com/narwold/renovate-test/pull/6) See previous discussion at https://github.com/renovatebot/renovate/discussions/16369 ### Relevant debug logs The log specifically mentions this error: ``` Bundler could not find compatible versions for gem \"flipper\":\n In snapshot (Gemfile.lock):\n flipper (>= 0.25.0)\n\n In Gemfile:\n flipper\n\n flipper-redis (~> 0.22.2) was resolved to 0.22.2, which depends on\n flipper (~> 0.22.2)\n\nRunning `bundle update` will rebuild your snapshot from scratch, using only\nthe gems in your Gemfile, which may resolve the conflict.\n ``` ...and if you run `bundle install` locally, it's the same result. If you manually update the Gemfile, then it works. ### Have you created a minimal reproduction repository? I have linked to a minimal reproduction repository in the bug description
non_process
gemfile not updating when rangestrategy update lockfile how are you running renovate mend renovate hosted app on github com if you re self hosting renovate tell us what version of renovate you run no response please select which platform you are using if self hosting no response if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug the docs for rangestrategy update lockfile say that it will fallback to rangestrategy replace when the new version is not supported by the existing range given flipper which wouldn t support renovate tries to update the lockfile but then fails to run the lockfile update command since the gemfile doesn t support the new version why isn t it updating the gemfile for reproduction see note that it does update the gemfile when using the replace strategy see previous attempt at see previous discussion at relevant debug logs the log specifically mentions this error bundler could not find compatible versions for gem flipper n in snapshot gemfile lock n flipper n n in gemfile n flipper n n flipper redis was resolved to which depends on n flipper n nrunning bundle update will rebuild your snapshot from scratch using only nthe gems in your gemfile which may resolve the conflict n and if you run bundle install locally it s the same result if you manually update the gemfile then it works have you created a minimal reproduction repository i have linked to a minimal reproduction repository in the bug description
0
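The crux of the renovate record is that flipper 0.25.0 falls outside `~> 0.22.2`, the constraint flipper-redis pins in the Gemfile, so only rewriting the Gemfile can admit the new version. A short Python sketch of Ruby's pessimistic operator (an illustration, not Bundler's resolver) shows why: `~> 0.22.2` means `>= 0.22.2` and `< 0.23.0`.

```
def satisfies_pessimistic(version, constraint):
    """Ruby's pessimistic operator: ~> 0.22.2 means >= 0.22.2, < 0.23.0.

    Illustrative sketch only; real gem requirements also handle
    prerelease tags and multi-constraint lists.
    """
    v = [int(part) for part in version.split(".")]
    lower = [int(part) for part in constraint.split(".")]
    upper = lower[:-1]
    upper[-1] += 1                        # ~> 0.22.2 -> upper bound 0.23
    upper += [0] * (len(v) - len(upper))  # pad for list comparison
    return lower <= v and v < upper

print(satisfies_pessimistic("0.22.9", "0.22.2"))  # True
print(satisfies_pessimistic("0.25.0", "0.22.2"))  # False -> the Gemfile range must change
```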
60,469
8,440,361,225
IssuesEvent
2018-10-18 06:58:51
MurhafSousli/ngx-disqus
https://api.github.com/repos/MurhafSousli/ngx-disqus
closed
More Options Doc Update
Documentation
Thank you so much, your work is amazing! The documentation requires a minor update for More Options: the event (onNewComment) is now (newComment) in the disqus component, (onReady) => (ready), and (onPaginate) => (paginate). Also, the DisqusComment interface has changed from name to text in the disqus model.
1.0
More Options Doc Update - Thank you so much, your work is amazing! The documentation requires a minor update for More Options: the event (onNewComment) is now (newComment) in the disqus component, (onReady) => (ready), and (onPaginate) => (paginate). Also, the DisqusComment interface has changed from name to text in the disqus model.
non_process
more options doc update thank you so much your work is amazing the documentation requires a minor update for more options the event onnewcomment is now newcomment in the disqus component onready ready and onpaginate paginate also the disquscomment interface has changed from name to text in the disqus model
0
20,062
26,552,013,001
IssuesEvent
2023-01-20 08:44:21
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
vuln test requests flagged as invalid requests
log-processing
requests like the following are ending up in the `invalid-requests.log` file (goaccess 1.7): ``` aaa.bbb.ccc.ddd - - [31/Dec/2022:06:27:37 +0000] "GET /api/?a=proxy:unix:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa|http://reddit.com/? HTTP/1.1" 404 3951 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2866.71 Safari/537.36" "-" 0.706 TLSv1.2 ``` not a big loss not being counted in the stats, but i don't see anything invalid about per se...
1.0
vuln test requests flagged as invalid requests - requests like the following are ending up in the `invalid-requests.log` file (goaccess 1.7): ``` aaa.bbb.ccc.ddd - - [31/Dec/2022:06:27:37 +0000] "GET /api/?a=proxy:unix:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa|http://reddit.com/? HTTP/1.1" 404 3951 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2866.71 Safari/537.36" "-" 0.706 TLSv1.2 ``` not a big loss not being counted in the stats, but i don't see anything invalid about per se...
process
vuln test requests flagged as invalid requests requests like the following are ending up in the invalid requests log file goaccess aaa bbb ccc ddd get api a proxy unix aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa http mozilla macintosh intel mac os x applewebkit khtml like gecko chrome safari not a big loss not being counted in the stats but i don t see anything invalid about per se
1
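The closing point of the goaccess record -- that nothing about the line is invalid per se -- can be checked mechanically: apart from its length, the request is an ordinary combined-format hit. Below is a hedged Python sketch using a generic combined-log regex (goaccess parses against its own log-format spec, not this expression), with a shortened stand-in for the multi-kilobyte probe payload.

```
import re

# Loose combined-log-format matcher (illustrative only; not goaccess's parser).
COMBINED = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

# Stand-in for the record's payload: the real line carries a few thousand
# 'a' characters in the query string.
line = (
    'aaa.bbb.ccc.ddd - - [31/Dec/2022:06:27:37 +0000] '
    '"GET /api/?a=proxy:unix:' + 'a' * 4000 + '|http://reddit.com/? HTTP/1.1" '
    '404 3951'
)

match = COMBINED.match(line)
# Parses as a normal hit despite the huge request target.
print(match.group("status"), len(match.group("request")))
```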
299,265
25,892,057,315
IssuesEvent
2022-12-14 18:48:59
oxidecomputer/propolis
https://api.github.com/repos/oxidecomputer/propolis
opened
Enable Crucible PHD tests in CI
enhancement storage testing
To run Crucible tests, PHD needs the path to a local `crucible-downstairs` binary that was built at the same commit as PHD's/Propolis's `crucible-client-types` dependency. Crucible's `build-release` buildomat job publishes an archive (`crucible-nightly.tar.gz`) that contains the downstairs binary (among other things), so it is probably just a matter of finding the most appropriate way to pull that down in `phd-run.sh`.
1.0
Enable Crucible PHD tests in CI - To run Crucible tests, PHD needs the path to a local `crucible-downstairs` binary that was built at the same commit as PHD's/Propolis's `crucible-client-types` dependency. Crucible's `build-release` buildomat job publishes an archive (`crucible-nightly.tar.gz`) that contains the downstairs binary (among other things), so it is probably just a matter of finding the most appropriate way to pull that down in `phd-run.sh`.
non_process
enable crucible phd tests in ci to run crucible tests phd needs the path to a local crucible downstairs binary that was built at the same commit as phd s propolis s crucible client types dependency crucible s build release buildomat job publishes an archive crucible nightly tar gz that contains the downstairs binary among other things so it is probably just a matter of finding the most appropriate way to pull that down in phd run sh
0
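As a sketch of the task the propolis record describes, the following Python shows one way a helper invoked from `phd-run.sh` might fetch and unpack the archive so PHD can be pointed at the downstairs binary. The archive name comes from the record; the URL is a placeholder, since the record does not give the buildomat artifact location, so treat the whole thing as an assumption rather than the project's actual CI logic.

```
import tarfile
import urllib.request

# Placeholder URL: the record names the artifact (crucible-nightly.tar.gz)
# but not where buildomat publishes it, so this location is hypothetical.
ARCHIVE_URL = "https://example.com/crucible-nightly.tar.gz"

def fetch_downstairs(dest_dir="crucible"):
    """Download the nightly archive and extract it so PHD can be pointed
    at the crucible-downstairs binary inside (illustrative sketch)."""
    archive_path, _ = urllib.request.urlretrieve(ARCHIVE_URL)
    # extractall assumes a trusted, CI-produced archive.
    with tarfile.open(archive_path, "r:gz") as archive:
        archive.extractall(dest_dir)
    return dest_dir

if __name__ == "__main__":
    print(fetch_downstairs())
```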
51,862
13,652,416,539
IssuesEvent
2020-09-27 07:24:25
srivatsamarichi/simplcommerce
https://api.github.com/repos/srivatsamarichi/simplcommerce
closed
CVE-2020-11022 (Medium) detected in jquery-2.2.3.min.js, jquery-2.2.3.js
bug security vulnerability
## CVE-2020-11022 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.2.3.min.js</b>, <b>jquery-2.2.3.js</b></p></summary> <p> <details><summary><b>jquery-2.2.3.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.min.js</a></p> <p>Path to vulnerable library: simplcommerce/src/SimplCommerce.WebHost/wwwroot/lib/jquery/jquery.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-2.2.3.min.js** (Vulnerable Library) </details> <details><summary><b>jquery-2.2.3.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.js</a></p> <p>Path to vulnerable library: simplcommerce/src/SimplCommerce.WebHost/wwwroot/lib/jquery/jquery.js</p> <p> Dependency Hierarchy: - :x: **jquery-2.2.3.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/simplcommerce/commit/b22ddf3b78e19857f8994720873e675653b1d3cf">b22ddf3b78e19857f8994720873e675653b1d3cf</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. <p>Publish Date: 2020-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jQuery - 3.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-11022 (Medium) detected in jquery-2.2.3.min.js, jquery-2.2.3.js - ## CVE-2020-11022 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.2.3.min.js</b>, <b>jquery-2.2.3.js</b></p></summary> <p> <details><summary><b>jquery-2.2.3.min.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.min.js</a></p> <p>Path to vulnerable library: simplcommerce/src/SimplCommerce.WebHost/wwwroot/lib/jquery/jquery.min.js</p> <p> Dependency Hierarchy: - :x: **jquery-2.2.3.min.js** (Vulnerable Library) </details> <details><summary><b>jquery-2.2.3.js</b></p></summary> <p>JavaScript library for DOM operations</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.3/jquery.js</a></p> <p>Path to vulnerable library: simplcommerce/src/SimplCommerce.WebHost/wwwroot/lib/jquery/jquery.js</p> <p> Dependency Hierarchy: - :x: **jquery-2.2.3.js** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/simplcommerce/commit/b22ddf3b78e19857f8994720873e675653b1d3cf">b22ddf3b78e19857f8994720873e675653b1d3cf</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0. <p>Publish Date: 2020-04-29 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p> <p>Release Date: 2020-04-29</p> <p>Fix Resolution: jQuery - 3.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in jquery min js jquery js cve medium severity vulnerability vulnerable libraries jquery min js jquery js jquery min js javascript library for dom operations library home page a href path to vulnerable library simplcommerce src simplcommerce webhost wwwroot lib jquery jquery min js dependency hierarchy x jquery min js vulnerable library jquery js javascript library for dom operations library home page a href path to vulnerable library simplcommerce src simplcommerce webhost wwwroot lib jquery jquery js dependency hierarchy x jquery js vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
0
55,391
30,729,521,239
IssuesEvent
2023-07-27 23:19:33
c3d/DB48X-on-DM42
https://api.github.com/repos/c3d/DB48X-on-DM42
closed
Avoid looping over each function pointer in the user interface
performance
The garbage collector currently loops over all function pointers in the user interface. This is quite inefficient, and also totally useless given that we don't yet use key assignments except for the function keys, and then only assign a static object for it.
True
Avoid looping over each function pointer in the user interface - The garbage collector currently loops over all function pointers in the user interface. This is quite inefficient, and also totally useless given that we don't yet use key assignments except for the function keys, and then only assign a static object for it.
non_process
avoid looping over each function pointer in the user interface the garbage collector currently loops over all function pointers in the user interface this is quite inefficient and also totally useless given that we don t yet use key assignments except for the function keys and then only assign a static object for it
0
1,143
3,631,808,690
IssuesEvent
2016-02-11 04:46:25
nodejs/node
https://api.github.com/repos/nodejs/node
closed
test-child-process-emfile.js fails on freebsd
child_process freebsd test
I haven't had a chance to look into specifics, but just wanted to note it. Here is the error: ``` not ok 878 test-child-process-emfile.js # #assert.js:89 # throw new assert.AssertionError({ # ^ #AssertionError: 0 undefined null # at emitTwo (events.js:87:13) # at ChildProcess.emit (events.js:172:7) # at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12) ```
1.0
test-child-process-emfile.js fails on freebsd - I haven't had a chance to look into specifics, but just wanted to note it. Here is the error: ``` not ok 878 test-child-process-emfile.js # #assert.js:89 # throw new assert.AssertionError({ # ^ #AssertionError: 0 undefined null # at emitTwo (events.js:87:13) # at ChildProcess.emit (events.js:172:7) # at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12) ```
process
test child process emfile js fails on freebsd i haven t had a chance to look into specifics but just wanted to note it here is the error not ok test child process emfile js assert js throw new assert assertionerror assertionerror undefined null at emittwo events js at childprocess emit events js at process childprocess handle onexit internal child process js
1
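The node record shows only the assertion output, but the test name points at EMFILE handling. As a hedged illustration of the underlying condition (my Python sketch, not the node test itself, and it assumes the test exercises descriptor exhaustion as its name suggests), one can lower the file-descriptor limit and exhaust it:

```
import os
import resource  # Unix-only

# Provoke EMFILE (errno 24): lower the fd limit, then open files until
# the OS refuses.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))
handles = []
try:
    while True:
        handles.append(open(os.devnull))
except OSError as error:
    print(error.errno, error.strerror)  # e.g. 24 'Too many open files'
finally:
    for handle in handles:
        handle.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```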
234,209
25,806,043,338
IssuesEvent
2022-12-11 12:20:35
SmartBear/readyapi-swagger-assertion-plugin
https://api.github.com/repos/SmartBear/readyapi-swagger-assertion-plugin
closed
CVE-2021-43859 (High) detected in xstream-1.3.1.jar - autoclosed
security vulnerability
## CVE-2021-43859 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary> <p></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/thoughtworks/xstream/1.3.1/xstream-1.3.1.jar</p> <p> Dependency Hierarchy: - ready-api-soapui-pro-1.7.0.jar (Root Library) - ready-api-soapui-1.7.0.jar - :x: **xstream-1.3.1.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XStream is an open source java library to serialize objects to XML and back again. Versions prior to 1.4.19 may allow a remote attacker to allocate 100% CPU time on the target system depending on CPU type or parallel execution of such a payload resulting in a denial of service only by manipulating the processed input stream. XStream 1.4.19 monitors and accumulates the time it takes to add elements to collections and throws an exception if a set threshold is exceeded. Users are advised to upgrade as soon as possible. Users unable to upgrade may set the NO_REFERENCE mode to prevent recursion. See GHSA-rmr5-cpv2-vgjf for further details on a workaround if an upgrade is not possible. <p>Publish Date: 2022-02-01 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-43859>CVE-2021-43859</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-rmr5-cpv2-vgjf">https://github.com/advisories/GHSA-rmr5-cpv2-vgjf</a></p> <p>Release Date: 2022-02-01</p> <p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.19</p> </p> </details> <p></p>
True
CVE-2021-43859 (High) detected in xstream-1.3.1.jar - autoclosed - ## CVE-2021-43859 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.3.1.jar</b></p></summary> <p></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/thoughtworks/xstream/1.3.1/xstream-1.3.1.jar</p> <p> Dependency Hierarchy: - ready-api-soapui-pro-1.7.0.jar (Root Library) - ready-api-soapui-1.7.0.jar - :x: **xstream-1.3.1.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XStream is an open source java library to serialize objects to XML and back again. Versions prior to 1.4.19 may allow a remote attacker to allocate 100% CPU time on the target system depending on CPU type or parallel execution of such a payload resulting in a denial of service only by manipulating the processed input stream. XStream 1.4.19 monitors and accumulates the time it takes to add elements to collections and throws an exception if a set threshold is exceeded. Users are advised to upgrade as soon as possible. Users unable to upgrade may set the NO_REFERENCE mode to prevent recursion. See GHSA-rmr5-cpv2-vgjf for further details on a workaround if an upgrade is not possible. <p>Publish Date: 2022-02-01 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-43859>CVE-2021-43859</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-rmr5-cpv2-vgjf">https://github.com/advisories/GHSA-rmr5-cpv2-vgjf</a></p> <p>Release Date: 2022-02-01</p> <p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.19</p> </p> </details> <p></p>
non_process
cve high detected in xstream jar autoclosed cve high severity vulnerability vulnerable library xstream jar path to dependency file pom xml path to vulnerable library home wss scanner repository thoughtworks xstream xstream jar dependency hierarchy ready api soapui pro jar root library ready api soapui jar x xstream jar vulnerable library found in base branch master vulnerability details xstream is an open source java library to serialize objects to xml and back again versions prior to may allow a remote attacker to allocate cpu time on the target system depending on cpu type or parallel execution of such a payload resulting in a denial of service only by manipulating the processed input stream xstream monitors and accumulates the time it takes to add elements to collections and throws an exception if a set threshold is exceeded users are advised to upgrade as soon as possible users unable to upgrade may set the no reference mode to prevent recursion see ghsa vgjf for further details on a workaround if an upgrade is not possible publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com thoughtworks xstream xstream
0
8,901
11,995,404,024
IssuesEvent
2020-04-08 15:09:30
nanoframework/Home
https://api.github.com/repos/nanoframework/Home
closed
Problem with new MDP and delegates
Area: Metadata Processor Priority: High Type: Bug up-for-grabs
## Details about Problem **nanoFramework area:** Visual Studio extension **VS version:** **VS extension version:** 2017.1.8.10 **Target:** **Firmware image version:** **Device capabilities output:** ## Description Cannot compile program: ![image](https://user-images.githubusercontent.com/47118568/77903677-8a9c4200-7283-11ea-9bc6-be6d5fad71e3.png) ## Detailed repro steps so we can see the same problem See attached program [Delegates.zip](https://github.com/nanoframework/Home/files/4402188/Delegates.zip) ## Other suggested things ## Expected behaviour ## Screenshot ## Additional context
1.0
Problem with new MDP and delegates - ## Details about Problem **nanoFramework area:** Visual Studio extension **VS version:** **VS extension version:** 2017.1.8.10 **Target:** **Firmware image version:** **Device capabilities output:** ## Description Cannot compile program: ![image](https://user-images.githubusercontent.com/47118568/77903677-8a9c4200-7283-11ea-9bc6-be6d5fad71e3.png) ## Detailed repro steps so we can see the same problem See attached program [Delegates.zip](https://github.com/nanoframework/Home/files/4402188/Delegates.zip) ## Other suggested things ## Expected behaviour ## Screenshot ## Additional context
process
problem with new mdp and delegates details about problem nanoframework area visual studio extension vs version vs extension version target firmware image version device capabilities output description cannot compile program detailed repro steps so we can see the same problem see attached program other suggested things expected behaviour screenshot additional context
1
13,380
15,857,318,030
IssuesEvent
2021-04-08 04:30:18
log2timeline/plaso
https://api.github.com/repos/log2timeline/plaso
closed
preprocessing: failing on RHEL6 image due to symbolic link
enhancement preprocessing
**Description of problem:** I am trying to create plaso from vmdk with the following command line and it throws an error. **Command line and arguments:** log2timeline.py --partitions all --volumes all /data/server.plaso /data/server.vmdk **Source data:** # Disk DescriptorFile version=3 encoding="UTF-8" CID=577c21a4 parentCID=ffffffff createType="vmfs" **Plaso version:** 20210213 **Operating system Plaso is running on:** CentOS 7 **Installation method:** Plaso installed from pip3. All the dependencies are met. `# log2timeline.py --troubles 2021-03-06 22:50:11,972 [INFO] (MainProcess) PID:10756 <data_location> Determined data location: /usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/share/plaso Using Python version 3.6.8 (default, Apr 2 2020, 13:34:55) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] Path: /usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/EGG-INFO/scripts/log2timeline.py plaso - log2timeline version 20210213 Checking availability and versions of dependencies. [OK] artifacts version: 20200515 [OK] bencode [OK] certifi version: 2019.11.28 [OK] chardet version: 3.0.4 [OK] cryptography version: 3.3.2 [OK] dateutil version: 2.8.1 [OK] defusedxml version: 0.5.0 [OK] dfdatetime version: 20200824 [OK] dfvfs version: 20210213 [OK] dfwinreg version: 20201006 [OK] dtfabric version: 20200621 [OK] elasticsearch version: 7.5.1 [OK] future version: 0.18.2 [OK] idna version: 3.1 [OK] lz4 version: 3.0.2 [OK] pefile version: 2019.4.18 [OK] psutil version: 5.7.0 [OK] pybde version: 20191221 [OK] pycreg version: 20200725 [OK] pyesedb version: 20200418 [OK] pyevt version: 20200418 [OK] pyevtx version: 20200419 [OK] pyewf version: 20140806 [OK] pyfsapfs version: 20201107 [OK] pyfsext version: 20210129 [OK] pyfshfs version: 20201104 [OK] pyfsntfs version: 20201115 [OK] pyfsxfs version: 20201117 [OK] pyfvde version: 20191221 [OK] pyfwnt version: 20191222 [OK] pyfwsi version: 20191221 [OK] pylnk version: 20191221 [OK] pyluksde version: 20200205 [OK] pymsiecf version: 20191221 [OK] pyolecf version: 20191221 [OK] pyparsing version: 2.4.7 [OK] pyqcow version: 20191221 [OK] pyregf version: 20201007 [OK] pyscca version: 20191222 [OK] pysigscan version: 20191221 [OK] pysmdev version: 20200210 [OK] pysmraw version: 20191221 [OK] pytsk3 version: 20200117 [OK] pytz [OK] pyvhdi version: 20191221 [OK] pyvmdk version: 20191221 [OK] pyvsgpt version: 20210207 [OK] pyvshadow version: 20191221 [OK] pyvslvm version: 20200102 [OK] redis version: 3.5.3 [OK] requests version: 2.25.0 [OK] six version: 1.12.0 [OK] urllib3 version: 1.25.8 [OK] xlsxwriter version: 1.2.8 [OK] yaml version: 5.4.1 [OK] yara version: 3.11.0 [OK] zmq version: 14.7.0` **Debug output/tracebacks:** `Processing started. Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/dfvfs/resolver/resolver.py", line 109, in OpenFileObject file_object.Open() File "/usr/local/lib/python3.6/site-packages/dfvfs/file_io/file_io.py", line 89, in Open self._Open(mode=mode) File "/usr/local/lib/python3.6/site-packages/dfvfs/file_io/tsk_file_io.py", line 117, in _Open raise IOError('Not a regular file.') OSError: Not a regular file. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/bin/log2timeline.py", line 4, in <module> __import__('pkg_resources').run_script('plaso==20210213', 'log2timeline.py') File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 667, in run_script self.require(requires)[0].run_script(script_name, ns) File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 1464, in run_script exec(code, namespace, namespace) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/EGG-INFO/scripts/log2timeline.py", line 94, in <module> if not Main(): File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/EGG-INFO/scripts/log2timeline.py", line 69, in Main tool.ExtractEventsFromSources() File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/cli/log2timeline_tool.py", line 414, in ExtractEventsFromSources self._PreprocessSources(extraction_engine) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/cli/extraction_tool.py", line 331, in _PreprocessSources resolver_context=self._resolver_context) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/engine/engine.py", line 287, in PreprocessSources self.knowledge_base) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/preprocessors/manager.py", line 317, in RunPlugins artifacts_registry, knowledge_base, searcher, file_system) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/preprocessors/manager.py", line 155, in CollectFromFileSystem knowledge_base, artifact_definition, searcher, file_system) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/preprocessors/interface.py", line 78, in Collect source.separator) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/preprocessors/interface.py", line 132, in _ParsePathSpecification self._ParseFileEntry(knowledge_base, file_entry) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/preprocessors/interface.py", line 166, in _ParseFileEntry file_object = file_entry.GetFileObject() File "/usr/local/lib/python3.6/site-packages/dfvfs/vfs/tsk_file_entry.py", line 804, in GetFileObject path_spec, resolver_context=self._resolver_context) File "/usr/local/lib/python3.6/site-packages/dfvfs/resolver/resolver.py", line 112, in OpenFileObject 'Unable to open file object with error: {0!s}'.format(exception)) dfvfs.lib.errors.BackEndError: Unable to open file object with error: Not a regular file.`
1.0
preprocessing: failing on RHEL6 image due to symbolic link - **Description of problem:** I am trying to create plaso from vmdk with the following command line and it throws an error. **Command line and arguments:** log2timeline.py --partitions all --volumes all /data/server.plaso /data/server.vmdk **Source data:** # Disk DescriptorFile version=3 encoding="UTF-8" CID=577c21a4 parentCID=ffffffff createType="vmfs" **Plaso version:** 20210213 **Operating system Plaso is running on:** CentOS 7 **Installation method:** Plaso installed from pip3. All the dependencies are met. `# log2timeline.py --troubles 2021-03-06 22:50:11,972 [INFO] (MainProcess) PID:10756 <data_location> Determined data location: /usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/share/plaso Using Python version 3.6.8 (default, Apr 2 2020, 13:34:55) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] Path: /usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/EGG-INFO/scripts/log2timeline.py plaso - log2timeline version 20210213 Checking availability and versions of dependencies. [OK] artifacts version: 20200515 [OK] bencode [OK] certifi version: 2019.11.28 [OK] chardet version: 3.0.4 [OK] cryptography version: 3.3.2 [OK] dateutil version: 2.8.1 [OK] defusedxml version: 0.5.0 [OK] dfdatetime version: 20200824 [OK] dfvfs version: 20210213 [OK] dfwinreg version: 20201006 [OK] dtfabric version: 20200621 [OK] elasticsearch version: 7.5.1 [OK] future version: 0.18.2 [OK] idna version: 3.1 [OK] lz4 version: 3.0.2 [OK] pefile version: 2019.4.18 [OK] psutil version: 5.7.0 [OK] pybde version: 20191221 [OK] pycreg version: 20200725 [OK] pyesedb version: 20200418 [OK] pyevt version: 20200418 [OK] pyevtx version: 20200419 [OK] pyewf version: 20140806 [OK] pyfsapfs version: 20201107 [OK] pyfsext version: 20210129 [OK] pyfshfs version: 20201104 [OK] pyfsntfs version: 20201115 [OK] pyfsxfs version: 20201117 [OK] pyfvde version: 20191221 [OK] pyfwnt version: 20191222 [OK] pyfwsi version: 20191221 [OK] pylnk version: 20191221 [OK] pyluksde version: 20200205 [OK] pymsiecf version: 20191221 [OK] pyolecf version: 20191221 [OK] pyparsing version: 2.4.7 [OK] pyqcow version: 20191221 [OK] pyregf version: 20201007 [OK] pyscca version: 20191222 [OK] pysigscan version: 20191221 [OK] pysmdev version: 20200210 [OK] pysmraw version: 20191221 [OK] pytsk3 version: 20200117 [OK] pytz [OK] pyvhdi version: 20191221 [OK] pyvmdk version: 20191221 [OK] pyvsgpt version: 20210207 [OK] pyvshadow version: 20191221 [OK] pyvslvm version: 20200102 [OK] redis version: 3.5.3 [OK] requests version: 2.25.0 [OK] six version: 1.12.0 [OK] urllib3 version: 1.25.8 [OK] xlsxwriter version: 1.2.8 [OK] yaml version: 5.4.1 [OK] yara version: 3.11.0 [OK] zmq version: 14.7.0` **Debug output/tracebacks:** `Processing started. Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/dfvfs/resolver/resolver.py", line 109, in OpenFileObject file_object.Open() File "/usr/local/lib/python3.6/site-packages/dfvfs/file_io/file_io.py", line 89, in Open self._Open(mode=mode) File "/usr/local/lib/python3.6/site-packages/dfvfs/file_io/tsk_file_io.py", line 117, in _Open raise IOError('Not a regular file.') OSError: Not a regular file. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/bin/log2timeline.py", line 4, in <module> __import__('pkg_resources').run_script('plaso==20210213', 'log2timeline.py') File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 667, in run_script self.require(requires)[0].run_script(script_name, ns) File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 1464, in run_script exec(code, namespace, namespace) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/EGG-INFO/scripts/log2timeline.py", line 94, in <module> if not Main(): File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/EGG-INFO/scripts/log2timeline.py", line 69, in Main tool.ExtractEventsFromSources() File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/cli/log2timeline_tool.py", line 414, in ExtractEventsFromSources self._PreprocessSources(extraction_engine) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/cli/extraction_tool.py", line 331, in _PreprocessSources resolver_context=self._resolver_context) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/engine/engine.py", line 287, in PreprocessSources self.knowledge_base) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/preprocessors/manager.py", line 317, in RunPlugins artifacts_registry, knowledge_base, searcher, file_system) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/preprocessors/manager.py", line 155, in CollectFromFileSystem knowledge_base, artifact_definition, searcher, file_system) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/preprocessors/interface.py", line 78, in Collect source.separator) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/preprocessors/interface.py", line 132, in _ParsePathSpecification self._ParseFileEntry(knowledge_base, file_entry) File "/usr/local/lib/python3.6/site-packages/plaso-20210213-py3.6.egg/plaso/preprocessors/interface.py", line 166, in _ParseFileEntry file_object = file_entry.GetFileObject() File "/usr/local/lib/python3.6/site-packages/dfvfs/vfs/tsk_file_entry.py", line 804, in GetFileObject path_spec, resolver_context=self._resolver_context) File "/usr/local/lib/python3.6/site-packages/dfvfs/resolver/resolver.py", line 112, in OpenFileObject 'Unable to open file object with error: {0!s}'.format(exception)) dfvfs.lib.errors.BackEndError: Unable to open file object with error: Not a regular file.`
process
preprocessing failing on image due to symbolic link description of problem i am trying to create plaso from vmdk with the following command line and it throws an error command line and arguments py partitions all volumes all data server plaso data server vmdk source data disk descriptorfile version encoding utf cid parentcid ffffffff createtype vmfs plaso version operating system plaso is running on centos installation method plaso installed from all the dependencies are met py troubles mainprocess pid determined data location usr local lib site packages plaso egg share plaso using python version default apr path usr local lib site packages plaso egg egg info scripts py plaso version checking availability and versions of dependencies artifacts version bencode certifi version chardet version cryptography version dateutil version defusedxml version dfdatetime version dfvfs version dfwinreg version dtfabric version elasticsearch version future version idna version version pefile version psutil version pybde version pycreg version pyesedb version pyevt version pyevtx version pyewf version pyfsapfs version pyfsext version pyfshfs version pyfsntfs version pyfsxfs version pyfvde version pyfwnt version pyfwsi version pylnk version pyluksde version pymsiecf version pyolecf version pyparsing version pyqcow version pyregf version pyscca version pysigscan version pysmdev version pysmraw version version pytz pyvhdi version pyvmdk version pyvsgpt version pyvshadow version pyvslvm version redis version requests version six version version xlsxwriter version yaml version yara version zmq version debug output tracebacks processing started traceback most recent call last file usr local lib site packages dfvfs resolver resolver py line in openfileobject file object open file usr local lib site packages dfvfs file io file io py line in open self open mode mode file usr local lib site packages dfvfs file io tsk file io py line in open raise ioerror not a regular file oserror not a regular file during handling of the above exception another exception occurred traceback most recent call last file usr local bin py line in import pkg resources run script plaso py file usr local lib site packages pkg resources init py line in run script self require requires run script script name ns file usr local lib site packages pkg resources init py line in run script exec code namespace namespace file usr local lib site packages plaso egg egg info scripts py line in if not main file usr local lib site packages plaso egg egg info scripts py line in main tool extracteventsfromsources file usr local lib site packages plaso egg plaso cli tool py line in extracteventsfromsources self preprocesssources extraction engine file usr local lib site packages plaso egg plaso cli extraction tool py line in preprocesssources resolver context self resolver context file usr local lib site packages plaso egg plaso engine engine py line in preprocesssources self knowledge base file usr local lib site packages plaso egg plaso preprocessors manager py line in runplugins artifacts registry knowledge base searcher file system file usr local lib site packages plaso egg plaso preprocessors manager py line in collectfromfilesystem knowledge base artifact definition searcher file system file usr local lib site packages plaso egg plaso preprocessors interface py line in collect source separator file usr local lib site packages plaso egg plaso preprocessors interface py line in parsepathspecification self parsefileentry knowledge base file entry file usr local lib site packages plaso egg plaso preprocessors interface py line in parsefileentry file object file entry getfileobject file usr local lib site packages dfvfs vfs tsk file entry py line in getfileobject path spec resolver context self resolver context file usr local lib site packages dfvfs resolver resolver py line in openfileobject unable to open file object with error s format exception dfvfs lib errors backenderror unable to open file object with error not a regular file
1
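The traceback in the record above dies inside dfvfs when the preprocessor opens a path that resolves to a symbolic link rather than a regular file. Below is a minimal standard-library Python sketch of the guard idea only; the function name and the skip-on-None convention are assumptions for illustration, not the actual dfvfs/plaso fix.

```python
import os
import stat

def open_regular_file(path):
    """Open `path` for reading only if it is a regular file.

    lstat() deliberately does not follow symlinks, so a symbolic link is
    detected and skipped instead of triggering the "Not a regular file."
    failure seen in the traceback above.
    """
    mode = os.lstat(path).st_mode
    if not stat.S_ISREG(mode):
        return None  # caller logs and skips this preprocessing source
    return open(path, "rb")
```

In a real fix one would expect the preprocessor to log and skip such entries rather than abort the whole extraction run.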
57,916
14,242,019,082
IssuesEvent
2020-11-19 00:45:23
xamarin/xamarin-android
https://api.github.com/repos/xamarin/xamarin-android
closed
[Bug] Warning in Google Console when publishing an app from Visual Studio
Area: App+Library Build
### Description When I published my app to the new (beta) Google console, i got two warnings, one of them was "This App Bundle contains Java/Kotlin code, which might be obfuscated." I found this issue that describes my situation: https://github.com/apache/cordova-android/issues/1008 From what I read it seems that the publishing procedure inside visual studio needs to be updated to upload some additional file so this warning goes away.
1.0
[Bug] Warning in Google Console when publishing an app from Visual Studio - ### Description When I published my app to the new (beta) Google console, i got two warnings, one of them was "This App Bundle contains Java/Kotlin code, which might be obfuscated." I found this issue that describes my situation: https://github.com/apache/cordova-android/issues/1008 From what I read it seems that the publishing procedure inside visual studio needs to be updated to upload some additional file so this warning goes away.
non_process
warning in google console when publishing an app from visual studio description when i published my app to the new beta google console i got two warnings one of them was this app bundle contains java kotlin code which might be obfuscated i found this issue that describes my situation from what i read it seems that the publishing procedure inside visual studio needs to be updated to upload some additional file so this warning goes away
0
1,896
4,725,787,471
IssuesEvent
2016-10-18 08:04:59
opentrials/opentrials
https://api.github.com/repos/opentrials/opentrials
opened
Refactor how we handle identifiers
API Explorer Processors refactoring
Currently, the trials' identifiers are simply a `jsonb` field in the `trials` table, where we save a dict like: ```javascript { 'nct': 'NCT00000000', 'ictrp': 'ICTRP00000000', } ``` This is simple and has been serving us well so far. However, there're a few disadvantages: * We can only have one identifier per source (see #299 for a case where that's an issue) * ... Given this, we should extract the identifiers into an actual `identifiers` table. It must have, at minimum, `identifier`, `source_id`, and the ID of the trial. We also want to know from which record an identifier came from. As the records have relations with the trials, a denormalized table would have `Identifiers` with only `source_id`, `record_id`, `identifier`, and to get a trials' identifiers we would do: ```sql SELECT trials.id, identifiers.identifier FROM trials INNER JOIN records ON records.trial_id = trials.id INNER JOIN identifiers ON identifiers.record_id = records.id ``` This seems too cumbersome, and we also lose a very interesting feature of having a separate identifiers table: saving our generated trial IDs. Right now, we have to save our whole trials table if we want to keep the same trial IDs for existing trials, so we can't simply delete the `trials` table and re-create it by running our collectors/processors again. They would have new OpenTrials IDs if we did so. With a separate `identifiers` table containing the `trial_id`s, as long as we keep the `identifiers` table untouched, we can delete the `trials` table and recreate it with the same identifiers by simply looking for them in the `identifiers` table. Because of this, we shouldn't strive for a normalized `identifiers` table. The schema I'm thinking is: | trial_id | record_id | source_id | identifier | | --- | --- | --- | --- | | TRIAL_1 | RECORD_1 | SOURCE_1 | 'NCT0000000' | | TRIAL_1 | RECORD_2 | SOURCE_2 | 'ICTRP00000' | | ... | ... | ... | ... | @akariv @pwalsh @opentrials/core Please add your thoughts here. Does it look sane?
1.0
Refactor how we handle identifiers - Currently, the trials' identifiers are simply a `jsonb` field in the `trials` table, where we save a dict like: ```javascript { 'nct': 'NCT00000000', 'ictrp': 'ICTRP00000000', } ``` This is simple and has been serving us well so far. However, there're a few disadvantages: * We can only have one identifier per source (see #299 for a case where that's an issue) * ... Given this, we should extract the identifiers into an actual `identifiers` table. It must have, at minimum, `identifier`, `source_id`, and the ID of the trial. We also want to know from which record an identifier came from. As the records have relations with the trials, a denormalized table would have `Identifiers` with only `source_id`, `record_id`, `identifier`, and to get a trials' identifiers we would do: ```sql SELECT trials.id, identifiers.identifier FROM trials INNER JOIN records ON records.trial_id = trials.id INNER JOIN identifiers ON identifiers.record_id = records.id ``` This seems too cumbersome, and we also lose a very interesting feature of having a separate identifiers table: saving our generated trial IDs. Right now, we have to save our whole trials table if we want to keep the same trial IDs for existing trials, so we can't simply delete the `trials` table and re-create it by running our collectors/processors again. They would have new OpenTrials IDs if we did so. With a separate `identifiers` table containing the `trial_id`s, as long as we keep the `identifiers` table untouched, we can delete the `trials` table and recreate it with the same identifiers by simply looking for them in the `identifiers` table. Because of this, we shouldn't strive for a normalized `identifiers` table. The schema I'm thinking is: | trial_id | record_id | source_id | identifier | | --- | --- | --- | --- | | TRIAL_1 | RECORD_1 | SOURCE_1 | 'NCT0000000' | | TRIAL_1 | RECORD_2 | SOURCE_2 | 'ICTRP00000' | | ... | ... | ... | ... | @akariv @pwalsh @opentrials/core Please add your thoughts here. Does it look sane?
process
refactor how we handle identifiers currently the trials identifiers are simply a jsonb field in the trials table where we save a dict like javascript nct ictrp this is simple and has been serving us well so far however there re a few disadvantages we can only have one identifier per source see for a case where that s an issue given this we should extract the identifiers into an actual identifiers table it must have at minimum identifier source id and the id of the trial we also want to know from which record an identifier came from as the records have relations with the trials a denormalized table would have identifiers with only source id record id identifier and to get a trials identifiers we would do sql select trials id identifiers identifier from trials inner join records on records trial id trials id inner join identifiers on identifiers record id records id this seems too cumbersome and we also lose a very interesting feature of having a separate identifiers table saving our generated trial ids right now we have to save our whole trials table if we want to keep the same trial ids for existing trials so we can t simply delete the trials table and re create it by running our collectors processors again they would have new opentrials ids if we did so with a separate identifiers table containing the trial id s as long as we keep the identifiers table untouched we can delete the trials table and recreate it with the same identifiers by simply looking for them in the identifiers table because of this we shouldn t strive for a normalized identifiers table the schema i m thinking is trial id record id source id identifier trial record source trial record source akariv pwalsh opentrials core please add your thoughts here does it look sane
1
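The schema proposal in the record above is concrete enough to run. Below is a sketch in Python, with sqlite3 standing in for the project's PostgreSQL; the table and column names come from the record, while the sample values and the in-memory database are illustrative assumptions.

```python
import sqlite3

# Denormalized identifiers table as proposed: one row per
# (trial, record, source, identifier) tuple.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE identifiers (
        trial_id   TEXT NOT NULL,  -- the generated OpenTrials ID to preserve
        record_id  TEXT NOT NULL,
        source_id  TEXT NOT NULL,
        identifier TEXT NOT NULL
    )
""")
con.executemany(
    "INSERT INTO identifiers VALUES (?, ?, ?, ?)",
    [
        ("TRIAL_1", "RECORD_1", "SOURCE_1", "NCT0000000"),
        ("TRIAL_1", "RECORD_2", "SOURCE_2", "ICTRP00000"),
    ],
)

# When the trials table is dropped and rebuilt, the stable trial ID is
# recovered by looking up the incoming identifier instead of minting a new one.
row = con.execute(
    "SELECT trial_id FROM identifiers WHERE source_id = ? AND identifier = ?",
    ("SOURCE_1", "NCT0000000"),
).fetchone()
print(row[0])  # TRIAL_1
```

The final lookup is the point of the design: trial IDs survive a full rebuild of the `trials` table as long as `identifiers` is left untouched.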
20,200
26,777,100,394
IssuesEvent
2023-01-31 17:59:05
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Setting ditamap as args.resources makes the process replace it with its temporary version
bug priority/medium preprocess
## Expected Behavior If I transform a topic into HTML5 and pass the ditamap as args.resources (for resolving keys), I expect the original files to remain the same. ## Actual Behavior The ditamap is overwritten by its temporary version and isn't valid anymore. Only applies to preprocess, preprocess2 is unaffected. ## Steps to Reproduce 1. Download and extract [oxygen_corrupted_map.zip](https://github.com/dita-ot/dita-ot/files/10401866/oxygen_corrupted_map.zip). 2. Run the following command: ``` dita -f html5 -i D:\oxygen_corrupted_map\topics\topic.dita -o D:\oxygen_corrupted_map\topics\out\topic-pdf -Dargs.resources=D:\oxygen_corrupted_map\map.ditamap -v ``` 3. The transformation finishes but the map links are broken. ## Copy of the error message, log file or stack trace The problem starts here: ``` debug-filter: [filter] Using Xerces grammar pool for DTD and schema caching. [filter] Processing file:/D:/oxygen_corrupted_map/map.ditamap to file:/D:/oxygen_corrupted_map/map.ditamap [filter] Processing file:/D:/oxygen_corrupted_map/topics/topic.dita to ... ``` ## Environment * DITA-OT version: 4.0.1 * Operating system and version: Windows * How did you run DITA-OT? `dita` command * Transformation type: HTML5
1.0
Setting ditamap as args.resources makes the process replace it with its temporary version - ## Expected Behavior If I transform a topic into HTML5 and pass the ditamap as args.resources (for resolving keys), I expect the original files to remain the same. ## Actual Behavior The ditamap is overwritten by its temporary version and isn't valid anymore. Only applies to preprocess, preprocess2 is unaffected. ## Steps to Reproduce 1. Download and extract [oxygen_corrupted_map.zip](https://github.com/dita-ot/dita-ot/files/10401866/oxygen_corrupted_map.zip). 2. Run the following command: ``` dita -f html5 -i D:\oxygen_corrupted_map\topics\topic.dita -o D:\oxygen_corrupted_map\topics\out\topic-pdf -Dargs.resources=D:\oxygen_corrupted_map\map.ditamap -v ``` 3. The transformation finishes but the map links are broken. ## Copy of the error message, log file or stack trace The problem starts here: ``` debug-filter: [filter] Using Xerces grammar pool for DTD and schema caching. [filter] Processing file:/D:/oxygen_corrupted_map/map.ditamap to file:/D:/oxygen_corrupted_map/map.ditamap [filter] Processing file:/D:/oxygen_corrupted_map/topics/topic.dita to ... ``` ## Environment * DITA-OT version: 4.0.1 * Operating system and version: Windows * How did you run DITA-OT? `dita` command * Transformation type: HTML5
process
setting ditamap as args resources makes the process replace it with its temporary version expected behavior if i transform a topic into and pass the ditamap as args resources for resolving keys i expect the original files to remain the same actual behavior the ditamap is overwritten by its temporary version and isn t valid anymore only applies to preprocess is unaffected steps to reproduce download and extract run the following command dita f i d oxygen corrupted map topics topic dita o d oxygen corrupted map topics out topic pdf dargs resources d oxygen corrupted map map ditamap v the transformation finishes but the map links are broken copy of the error message log file or stack trace the problem starts here debug filter using xerces grammar pool for dtd and schema caching processing file d oxygen corrupted map map ditamap to file d oxygen corrupted map map ditamap processing file d oxygen corrupted map topics topic dita to environment dita ot version operating system and version windows how did you run dita ot dita command transformation type
1
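Until the preprocess bug above is fixed, one defensive workaround is to never hand DITA-OT the original map. This Python sketch copies the source tree into a scratch directory first; the `dita` arguments mirror the command in the record, but the copy-first strategy, function name, and paths are assumptions, not official project guidance.

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def transform(topic: Path, out_dir: Path, resources_map: Path) -> None:
    # Copy the whole source tree, not just the map, so relative hrefs and
    # key references keep resolving, then point args.resources at the copy.
    # Even if preprocess rewrites the map in place, only the scratch copy
    # is damaged.
    with tempfile.TemporaryDirectory() as scratch:
        work = Path(scratch) / "src"
        shutil.copytree(resources_map.parent, work)
        safe_map = work / resources_map.name
        subprocess.run(
            ["dita", "-f", "html5", "-i", str(topic),
             "-o", str(out_dir), f"-Dargs.resources={safe_map}"],
            check=True,
        )
```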
288,589
24,917,763,447
IssuesEvent
2022-10-30 16:00:11
dromara/hertzbeat
https://api.github.com/repos/dromara/hertzbeat
closed
[Task] <Unit Test Case> manager/controller/SummaryControllerTest.java
status: volunteer wanted unit test case
### Description Help us impl Unit Test For [manager/controller/SummaryControllerTest.java](https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/controller/SummaryControllerTest.java) You can learn and refer to the previous test cases impl. 1. controller example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/controller/AccountControllerTest.java 2. service example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/service/TagServiceTest.java 3. jpa sql dao example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/dao/MonitorDaoTest.java ### Task List - [ ] Impl Unit Test For [manager/controller/SummaryControllerTest.java](https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/controller/SummaryControllerTest.java)
1.0
[Task] <Unit Test Case> manager/controller/SummaryControllerTest.java - ### Description Help us impl Unit Test For [manager/controller/SummaryControllerTest.java](https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/controller/SummaryControllerTest.java) You can learn and refer to the previous test cases impl. 1. controller example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/controller/AccountControllerTest.java 2. service example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/service/TagServiceTest.java 3. jpa sql dao example unit case: https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/dao/MonitorDaoTest.java ### Task List - [ ] Impl Unit Test For [manager/controller/SummaryControllerTest.java](https://github.com/dromara/hertzbeat/blob/master/manager/src/test/java/com/usthe/manager/controller/SummaryControllerTest.java)
non_process
manager controller summarycontrollertest java description help us impl unit test for you can learn and refer to the previous test cases impl controller example unit case service example unit case jpa sql dao example unit case task list impl unit test for
0
108,675
23,645,082,064
IssuesEvent
2022-08-25 21:06:49
PurdueIEEE/boilerbooks
https://api.github.com/repos/PurdueIEEE/boilerbooks
opened
Migrate to Postgres
code enhancement
Postgres is cool. Also, supporting JSON would mean certain features can be added without needing to change vast quantities of database schema.
1.0
Migrate to Postgres - Postgres is cool. Also, supporting JSON would mean certain features can be added without needing to change vast quantities of database schema.
non_process
migrate to postgres postgres is cool also supporting json would mean certain features can be added without needing to change vast quantities of database schema
0
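To make the JSON argument above concrete, here is a minimal psycopg2 sketch; the DSN, table, and field names are placeholders, and it assumes a reachable PostgreSQL instance. The point is that a new per-row attribute lands in a `JSONB` column with no schema migration.

```python
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("dbname=boilerbooks")  # placeholder DSN
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS purchases (
        id     SERIAL PRIMARY KEY,
        amount NUMERIC NOT NULL,
        extra  JSONB NOT NULL DEFAULT '{}'::jsonb
    )
""")
# A brand-new attribute ("approval_chain") goes straight into the JSONB
# column: no ALTER TABLE, no migration across the rest of the schema.
cur.execute(
    "INSERT INTO purchases (amount, extra) VALUES (%s, %s)",
    (42.50, Json({"approval_chain": ["treasurer", "president"]})),
)
# ->> extracts a JSON field as text, so the attribute stays queryable.
cur.execute(
    "SELECT amount FROM purchases WHERE extra ->> 'approval_chain' IS NOT NULL"
)
conn.commit()
```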
20,187
3,561,568,278
IssuesEvent
2016-01-23 21:52:19
javaslang/javaslang
https://api.github.com/repos/javaslang/javaslang
closed
Let single-valued Tests (OptionTest, EitherTest, ...) extend AbstractValueTest
design/refactoring
This has already been done in #1033 for Validation. There are still methods within AbstractValidationTest which need to be implemented. Search for `TODO`.
1.0
Let single-valued Tests (OptionTest, EitherTest, ...) extend AbstractValueTest - This has already been done in #1033 for Validation. There are still methods within AbstractValidationTest which need to be implemented. Search for `TODO`.
non_process
let single valued tests optiontest eithertest extend abstractvaluetest this has already been done in for validation there are still methods within abstractvalidationtest which need to be implemented search for todo
0
1,006
2,811,979,385
IssuesEvent
2015-05-18 04:01:43
HabitRPG/habitrpg
https://api.github.com/repos/HabitRPG/habitrpg
closed
TypeError: Cannot read property 'value' of undefined on api.wizard.fireball.cast
API classes performance
``` TypeError: Cannot read property 'value' of undefined at api.spells.wizard.fireball.cast (/app/node_modules/habitrpg-shared/script/content.coffee:1049:17) at Object.spell.cast (/app/node_modules/habitrpg-shared/script/content.coffee:1317:9) at api.cast (/app/src/controllers/user.js:386:13) at /app/node_modules/swagger-node-express/Common/node/swagger.js:392:11 at callbacks (/app/node_modules/express/lib/router/index.js:164:37) at api.cron (/app/src/controllers/user.js:205:24) at callbacks (/app/node_modules/express/lib/router/index.js:164:37) at Promise.<anonymous> (/app/src/controllers/auth.js:33:12) at Promise.<anonymous> (/app/node_modules/mongoose/node_modules/mpromise/lib/promise.js:177:8) at Promise.EventEmitter.emit (events.js:95:17) at Promise.emit (/app/node_modules/mongoose/node_modules/mpromise/lib/promise.js:84:38) at Promise.fulfill (/app/node_modules/mongoose/node_modules/mpromise/lib/promise.js:97:20) at /app/node_modules/mongoose/lib/query.js:1386:13 at model.Document.init (/app/node_modules/mongoose/lib/document.js:250:11) at completeOne (/app/node_modules/mongoose/lib/query.js:1384:10) at Object.cb (/app/node_modules/mongoose/lib/query.js:1143:11) at Object._onImmediate (/app/node_modules/mongoose/node_modules/mquery/lib/utils.js:126:16) at processImmediate [as _immediateCallback] (timers.js:330:15) ---------------------------- originalUrl: /api/v2/user/class/cast/fireball?targetType=task&targetId=b5140337-5cab-49b9-a1f0-8ee82cf2741e ``` <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/1396596-typeerror-cannot-read-property-value-of-undefined-on-api-wizard-fireball-cast?utm_campaign=plugin&utm_content=tracker%2F68393&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F68393&utm_medium=issues&utm_source=github). </bountysource-plugin>
True
TypeError: Cannot read property 'value' of undefined on api.wizard.fireball.cast - ``` TypeError: Cannot read property 'value' of undefined at api.spells.wizard.fireball.cast (/app/node_modules/habitrpg-shared/script/content.coffee:1049:17) at Object.spell.cast (/app/node_modules/habitrpg-shared/script/content.coffee:1317:9) at api.cast (/app/src/controllers/user.js:386:13) at /app/node_modules/swagger-node-express/Common/node/swagger.js:392:11 at callbacks (/app/node_modules/express/lib/router/index.js:164:37) at api.cron (/app/src/controllers/user.js:205:24) at callbacks (/app/node_modules/express/lib/router/index.js:164:37) at Promise.<anonymous> (/app/src/controllers/auth.js:33:12) at Promise.<anonymous> (/app/node_modules/mongoose/node_modules/mpromise/lib/promise.js:177:8) at Promise.EventEmitter.emit (events.js:95:17) at Promise.emit (/app/node_modules/mongoose/node_modules/mpromise/lib/promise.js:84:38) at Promise.fulfill (/app/node_modules/mongoose/node_modules/mpromise/lib/promise.js:97:20) at /app/node_modules/mongoose/lib/query.js:1386:13 at model.Document.init (/app/node_modules/mongoose/lib/document.js:250:11) at completeOne (/app/node_modules/mongoose/lib/query.js:1384:10) at Object.cb (/app/node_modules/mongoose/lib/query.js:1143:11) at Object._onImmediate (/app/node_modules/mongoose/node_modules/mquery/lib/utils.js:126:16) at processImmediate [as _immediateCallback] (timers.js:330:15) ---------------------------- originalUrl: /api/v2/user/class/cast/fireball?targetType=task&targetId=b5140337-5cab-49b9-a1f0-8ee82cf2741e ``` <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/1396596-typeerror-cannot-read-property-value-of-undefined-on-api-wizard-fireball-cast?utm_campaign=plugin&utm_content=tracker%2F68393&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F68393&utm_medium=issues&utm_source=github). </bountysource-plugin>
non_process
typeerror cannot read property value of undefined on api wizard fireball cast typeerror cannot read property value of undefined at api spells wizard fireball cast app node modules habitrpg shared script content coffee at object spell cast app node modules habitrpg shared script content coffee at api cast app src controllers user js at app node modules swagger node express common node swagger js at callbacks app node modules express lib router index js at api cron app src controllers user js at callbacks app node modules express lib router index js at promise app src controllers auth js at promise app node modules mongoose node modules mpromise lib promise js at promise eventemitter emit events js at promise emit app node modules mongoose node modules mpromise lib promise js at promise fulfill app node modules mongoose node modules mpromise lib promise js at app node modules mongoose lib query js at model document init app node modules mongoose lib document js at completeone app node modules mongoose lib query js at object cb app node modules mongoose lib query js at object onimmediate app node modules mongoose node modules mquery lib utils js at processimmediate timers js originalurl api user class cast fireball targettype task targetid want to back this issue we accept bounties via
0
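The trace above shows `cast` dereferencing `.value` on a task that was never found for the given `targetId`. As a language-neutral illustration, this Python sketch shows the guard one would expect; all names and the damage formula are hypothetical, since the real code is CoffeeScript in habitrpg-shared.

```python
def cast_fireball(user, tasks, target_id):
    # The crash: content.coffee read task.value without confirming that
    # the targeted task exists. A stale or wrong targetId hits this path.
    task = tasks.get(target_id)
    if task is None:
        raise LookupError(f"task not found: {target_id}")
    # Placeholder damage formula, not the game's real one.
    return task["value"] * user["stats"]["int"] * 0.07
```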
133
2,572,658,981
IssuesEvent
2015-02-11 00:11:43
tinkerpop/tinkerpop3
https://api.github.com/repos/tinkerpop/tinkerpop3
closed
ConjunctionStep.getHasContainers()
enhancement process
To make vendor life easy, we should provide `ConjunctionStep.getHasContainers()` and `ConjunctionStep.isPurelyHasSteps()`. ```java List<Conjunction,List> hasContainers = conjunctionStep.getHasContainers() ``` This would return something like: ```js [ and : [ has[name,eq,'marko'], or : [ and : [ has[age,gt,30], has[label,eq,person] ], ], ], ] ```
1.0
ConjunctionStep.getHasContainers() - To make vendor life easy, we should provide `ConjunctionStep.getHasContainers()` and `ConjunctionStep.isPurelyHasSteps()`. ```java List<Conjunction,List> hasContainers = conjunctionStep.getHasContainers() ``` This would return something like: ```js [ and : [ has[name,eq,'marko'], or : [ and : [ has[age,gt,30], has[label,eq,person] ], ], ], ] ```
process
conjunctionstep gethascontainers to make vendor life easy we should provide conjunctionstep gethascontainers and conjunctionstep ispurelyhassteps java list hascontainers conjunctionstep gethascontainers this would return something like js and has or and has has
1
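The nested result `getHasContainers()` should produce is easier to see with a tiny executable model. In this Python sketch the dict/tuple encoding of the conjunction tree is an assumption made for illustration; the real proposal returns Java types.

```python
# Nested conjunction tree from the record, with each has-container
# encoded as a (key, predicate, value) tuple.
tree = {
    "and": [
        ("name", "eq", "marko"),
        {"or": [
            {"and": [
                ("age", "gt", 30),
                ("label", "eq", "person"),
            ]},
        ]},
    ],
}

def has_containers(node):
    """Yield every has-container in the tree, depth-first."""
    if isinstance(node, tuple):      # leaf: a single has-container
        yield node
        return
    for children in node.values():   # {"and": [...]} or {"or": [...]}
        for child in children:
            yield from has_containers(child)

print(list(has_containers(tree)))
# [('name', 'eq', 'marko'), ('age', 'gt', 30), ('label', 'eq', 'person')]
```

The flattened list is the view a vendor would push down into an index, regardless of how deeply the predicates are nested.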
9,246
12,279,530,635
IssuesEvent
2020-05-08 12:24:10
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
Change label GO:0043903 regulation of symbiosis, encompassing mutualism through parasitism to regulation of interspecies interaction between organisms
multi-species process
GO:0043903 regulation of symbiosis, encompassing mutualism through parasitism to regulation of interspecies interaction between organisms This is required for #19027 @mgiglio99 Is that OK for you? Thanks, Pascale
1.0
Change label GO:0043903 regulation of symbiosis, encompassing mutualism through parasitism to regulation of interspecies interaction between organisms - GO:0043903 regulation of symbiosis, encompassing mutualism through parasitism to regulation of interspecies interaction between organisms This is required for #19027 @mgiglio99 Is that OK for you? Thanks, Pascale
process
change label go regulation of symbiosis encompassing mutualism through parasitism to regulation of interspecies interaction between organisms go regulation of symbiosis encompassing mutualism through parasitism to regulation of interspecies interaction between organisms this is required for is that ok for you thanks pascale
1
19,822
26,211,117,917
IssuesEvent
2023-01-04 06:34:50
vesoft-inc/nebula
https://api.github.com/repos/vesoft-inc/nebula
reopened
UNION bug
type/bug wontfix severity/none process/fixed affects/none
**Please check the FAQ documentation before raising an issue** <!-- Please check the [FAQ](https://docs.nebula-graph.com.cn/master/20.appendix/0.FAQ/) documentation and old issues before raising an issue in case someone has asked the same question that you are asking. --> **Describe the bug (__required__)** The following statement reports a syntax error. ``` UNWIND [1,2] AS a RETURN a UNION ALL RETURN 2 AS a ``` While the following could be executed successfully: ``` UNWIND [1,2] AS a RETURN a UNION ALL UNWIND [2] AS a RETURN a ``` <!-- A clear and concise description of what the bug is. --> **Your Environments (__required__)** * OS: `uname -a` * Compiler: `g++ --version` or `clang++ --version` * CPU: `lscpu` * Commit id (e.g. `a3ffc7d8`) **How To Reproduce(__required__)** Steps to reproduce the behavior: 1. Step 1 2. Step 2 3. Step 3 **Expected behavior** <!-- A clear and concise description of what you expected to happen. --> **Additional context** <!-- Provide logs and configs, or any other context to trace the problem. -->
1.0
UNION bug - **Please check the FAQ documentation before raising an issue** <!-- Please check the [FAQ](https://docs.nebula-graph.com.cn/master/20.appendix/0.FAQ/) documentation and old issues before raising an issue in case someone has asked the same question that you are asking. --> **Describe the bug (__required__)** The following statement reports a syntax error. ``` UNWIND [1,2] AS a RETURN a UNION ALL RETURN 2 AS a ``` While the following could be executed successfully: ``` UNWIND [1,2] AS a RETURN a UNION ALL UNWIND [2] AS a RETURN a ``` <!-- A clear and concise description of what the bug is. --> **Your Environments (__required__)** * OS: `uname -a` * Compiler: `g++ --version` or `clang++ --version` * CPU: `lscpu` * Commit id (e.g. `a3ffc7d8`) **How To Reproduce(__required__)** Steps to reproduce the behavior: 1. Step 1 2. Step 2 3. Step 3 **Expected behavior** <!-- A clear and concise description of what you expected to happen. --> **Additional context** <!-- Provide logs and configs, or any other context to trace the problem. -->
process
union bug please check the faq documentation before raising an issue describe the bug required the following statement reports a syntax error unwind as a return a union all return as a while the following could be executed successfully unwind as a return a union all unwind as a return a your environments required os uname a compiler g version or clang version cpu lscpu commit id e g how to reproduce required steps to reproduce the behavior step step step expected behavior additional context
1
14,232
17,151,087,429
IssuesEvent
2021-07-13 20:43:42
googleapis/google-cloud-go
https://api.github.com/repos/googleapis/google-cloud-go
closed
compute: fix bugs in alpha release
api: compute type: process
The `compute/apiv1` alpha release in v0.86.0 has a few known bugs. They are as follows: - [x] missing scheme on default endpoint - [x] `GET`/`DELETE` requests are generated with a body - [x] remove `EmitUnpopulated` encoding option First discovered post-release and then reported in #4377 by users. This is a tracking bug.
1.0
compute: fix bugs in alpha release - The `compute/apiv1` alpha release in v0.86.0 has a few known bugs. They are as follows: - [x] missing scheme on default endpoint - [x] `GET`/`DELETE` requests are generated with a body - [x] remove `EmitUnpopulated` encoding option First discovered post-release and then reported in #4377 by users. This is a tracking bug.
process
compute fix bugs in alpha release the compute alpha release in has a few known bugs they are as follows missing scheme on default endpoint get delete requests are generated with a body remove emitunpopulated encoding option first discovered post release and then reported in by users this is a tracking bug
1
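Of the three bugs listed above, the GET/DELETE-with-body one generalizes beyond this client. A small Python `requests` sketch of the corrected behavior follows; the helper name and payload handling are illustrative, since the real fix lives in the Go code generator.

```python
import requests

BODYLESS_METHODS = {"GET", "DELETE"}

def send(method, url, payload=None):
    # The bug: the generated client attached a body to every request.
    # The fix: never attach a body to methods that must not carry one,
    # since many servers reject or mishandle GET/DELETE with a body.
    kwargs = {}
    if payload is not None and method.upper() not in BODYLESS_METHODS:
        kwargs["json"] = payload
    return requests.request(method, url, **kwargs)

# send("GET", "https://example.com/resource")  # no body attached
```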