Column        Dtype          Stats
------        -----          -----
Unnamed: 0    int64          min 1, max 832k
id            float64        min 2.49B, max 32.1B
type          stringclasses  1 value
created_at    stringlengths  length 19 to 19
repo          stringlengths  length 7 to 112
repo_url      stringlengths  length 36 to 141
action        stringclasses  3 values
title         stringlengths  length 3 to 438
labels        stringlengths  length 4 to 308
body          stringlengths  length 7 to 254k
index         stringclasses  7 values
text_combine  stringlengths  length 96 to 254k
label         stringclasses  2 values
text          stringlengths  length 96 to 246k
binary_label  int64          min 0, max 1

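The column list above implies three derived fields, and the sample rows below bear this out: `text_combine` is the title and body joined with " - ", `text` is a lowercased copy with URLs, markup, punctuation, and digit-bearing tokens stripped out, and `binary_label` maps the `label` class to an integer ("main" becomes 1, "non_main" becomes 0). A minimal sketch of that derivation, assuming these relationships hold across the full dataset; the function name, the simplified regex cleanup, and the sample record are illustrative, not taken from the dataset:

```python
import re

def combine_and_label(record):
    """Derive text_combine, text, and binary_label from a raw issue record.

    Mirrors the relationships visible in the sample rows; the regex cleanup
    here is a simplification of whatever normalization produced the real
    `text` column (which also drops URLs, markdown, and numeric tokens).
    """
    text_combine = f"{record['title']} - {record['body']}"
    # Lowercase, then replace every run of non-letters with a single space.
    text = re.sub(r"[^a-z]+", " ", text_combine.lower()).strip()
    binary_label = 1 if record["label"] == "main" else 0
    return text_combine, text, binary_label

# Hypothetical record for illustration (not from the dataset).
row = {"title": "Fix crash on startup", "body": "App crashes in v2.1.", "label": "main"}
print(combine_and_label(row))
```

On the sample rows this mapping reproduces `binary_label` exactly (every "main" row carries 1, every "non_main" row carries 0); the real `text` column differs only where the original pipeline stripped URLs and markup more aggressively than this regex does.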
Unnamed: 0: 170,886
id: 13,207,653,620
type: IssuesEvent
created_at: 2020-08-15 00:00:15
repo: OneCDOnly/sherpa
repo_url: https://api.github.com/repos/OneCDOnly/sherpa
action: closed
title: sherpa should permit a single instance only
labels: enhancement testing ...
body: Don't want a second instance messing things up. :wink:
index: 1.0
text_combine: sherpa should permit a single instance only - Don't want a second instance messing things up. :wink:
label: non_main
text: sherpa should permit a single instance only don t want a second instance messing things up wink
binary_label: 0

Unnamed: 0: 721,505
id: 24,829,462,205
type: IssuesEvent
created_at: 2022-10-26 01:07:41
repo: AY2223S1-CS2113-T17-1/tp
repo_url: https://api.github.com/repos/AY2223S1-CS2113-T17-1/tp
action: closed
title: Change boarding gate number - both flightlist & passenger list
labels: priority.Medium
body: modify gate_number FLIGHT_NUM PREV_NUM NEW_NUM PREV_NUM must be a existing number. NEW_NUM cannot be an existing number. Due Date: 23rd Oct 2022 (Sunday)
index: 1.0
text_combine: Change boarding gate number - both flightlist & passenger list - modify gate_number FLIGHT_NUM PREV_NUM NEW_NUM PREV_NUM must be a existing number. NEW_NUM cannot be an existing number. Due Date: 23rd Oct 2022 (Sunday)
label: non_main
text: change boarding gate number both flightlist passenger list modify gate number flight num prev num new num prev num must be a existing number new num cannot be an existing number due date oct sunday
binary_label: 0

Unnamed: 0: 1,792
id: 6,721,542,962
type: IssuesEvent
created_at: 2017-10-16 12:10:42
repo: Chainsawkitten/LargeGameProjectEngine
repo_url: https://api.github.com/repos/Chainsawkitten/LargeGameProjectEngine
action: closed
title: Decide assets for game
labels: Architecture Asset
body: A document listing all assets to be used in game, highlighting those that are needed to be done by this sprint.
index: 1.0
text_combine: Decide assets for game - A document listing all assets to be used in game, highlighting those that are needed to be done by this sprint.
label: non_main
text: decide assets for game a document listing all assets to be used in game highlighting those that are needed to be done by this sprint
binary_label: 0

Unnamed: 0: 12,677
id: 14,970,622,060
type: IssuesEvent
created_at: 2021-01-27 19:53:32
repo: Leaflet/Leaflet
repo_url: https://api.github.com/repos/Leaflet/Leaflet
action: closed
title: marker.bindPopup() - version 1.7.1 only; on Mac Safari only - click event isn't recognized properly
labels: bug compatibility needs investigation
body: An event added via `marker.bindpopup()` - version 1.7.1 only using a Mac Safari (version 14 and 13) browser only - will **only** be recognized if one makes a _long mouse-click_ of about 1s like a long tap event. If one adds such an event via `marker.on('click', function(e) {this.openPopup();});` it works properly.
index: True
text_combine: marker.bindPopup() - version 1.7.1 only; on Mac Safari only - click event isn't recognized properly - An event added via `marker.bindpopup()` - version 1.7.1 only using a Mac Safari (version 14 and 13) browser only - will **only** be recognized if one makes a _long mouse-click_ of about 1s like a long tap event. If one adds such an event via `marker.on('click', function(e) {this.openPopup();});` it works properly.
label: non_main
text: marker bindpopup version only on mac safari only click event isn t recognized properly an event added via marker bindpopup version only using a mac safari version and browser only will only be recognized if one makes a long mouse click of about like a long tap event if one adds such an event via marker on click function e this openpopup it works properly
binary_label: 0

Unnamed: 0: 5,295
id: 26,761,214,212
type: IssuesEvent
created_at: 2023-01-31 07:04:03
repo: bazelbuild/intellij
repo_url: https://api.github.com/repos/bazelbuild/intellij
action: closed
title: No autocomplete or debugging on targets using custom rules
labels: type: bug product: GoLand topic: debugging awaiting-maintainer
body: #### Description of the issue. Please be specific. I'm trying to run and debug bazel targets in my workspace which use custom rules also defined in my workspace. I can execute these targets manually from the command line and can hardcode them when setting up a run configuration, but I don't get autocomplete when trying to find the target during configuration editing. ![image](https://user-images.githubusercontent.com/1986950/87694263-f71c4d80-c75b-11ea-9515-5fee326e01b2.png) Note in this screenshot how I've hardcoded `//helm/api:staging` and that it doesn't show up in the autocomplete list when adding a second target. I can run this target, but the UI doesn't let me debug it which I guess is part of the same problem. ![image](https://user-images.githubusercontent.com/1986950/87694469-3fd40680-c75c-11ea-97d9-1a8e269e2c93.png) This target uses a custom rule defined in a `helm.bzl` file located in the same workspace. Looking through the docs, it seems like the bazel plugin [ignores some rules](https://ij.bazel.build/docs/project-views.html#derive_targets_from_directories) which might contribute to this. #### What's the simplest set of steps to reproduce this issue? Please provide an example project, if possible. I'm still working on a simple repro and will update this question once I have it figured out. #### Version information GoLand: 2020.1.4 Platform: Mac OS X 10.15.5 Bazel plugin: 2020.06.25.0.3 Bazel: 2.2.0
index: True
text_combine: No autocomplete or debugging on targets using custom rules - #### Description of the issue. Please be specific. I'm trying to run and debug bazel targets in my workspace which use custom rules also defined in my workspace. I can execute these targets manually from the command line and can hardcode them when setting up a run configuration, but I don't get autocomplete when trying to find the target during configuration editing. ![image](https://user-images.githubusercontent.com/1986950/87694263-f71c4d80-c75b-11ea-9515-5fee326e01b2.png) Note in this screenshot how I've hardcoded `//helm/api:staging` and that it doesn't show up in the autocomplete list when adding a second target. I can run this target, but the UI doesn't let me debug it which I guess is part of the same problem. ![image](https://user-images.githubusercontent.com/1986950/87694469-3fd40680-c75c-11ea-97d9-1a8e269e2c93.png) This target uses a custom rule defined in a `helm.bzl` file located in the same workspace. Looking through the docs, it seems like the bazel plugin [ignores some rules](https://ij.bazel.build/docs/project-views.html#derive_targets_from_directories) which might contribute to this. #### What's the simplest set of steps to reproduce this issue? Please provide an example project, if possible. I'm still working on a simple repro and will update this question once I have it figured out. #### Version information GoLand: 2020.1.4 Platform: Mac OS X 10.15.5 Bazel plugin: 2020.06.25.0.3 Bazel: 2.2.0
label: main
text: no autocomplete or debugging on targets using custom rules description of the issue please be specific i m trying to run and debug bazel targets in my workspace which use custom rules also defined in my workspace i can execute these targets manually from the command line and can hardcode them when setting up a run configuration but i don t get autocomplete when trying to find the target during configuration editing note in this screenshot how i ve hardcoded helm api staging and that it doesn t show up in the autocomplete list when adding a second target i can run this target but the ui doesn t let me debug it which i guess is part of the same problem this target uses a custom rule defined in a helm bzl file located in the same workspace looking through the docs it seems like the bazel plugin which might contribute to this what s the simplest set of steps to reproduce this issue please provide an example project if possible i m still working on a simple repro and will update this question once i have it figured out version information goland platform mac os x bazel plugin bazel
binary_label: 1

Unnamed: 0: 738
id: 4,347,746,106
type: IssuesEvent
created_at: 2016-07-29 20:40:49
repo: coniks-sys/coniks-ref-implementation
repo_url: https://api.github.com/repos/coniks-sys/coniks-ref-implementation
action: opened
title: Restructure code as in coniks-go repo
labels: maintainability
body: The existing code base should be reorganized into the following packages: - crypto - client - merkletree - keyserver - protocol - utils These packages should be considered for the future: - bots for third-party account verification - storage for persistent storage backend hooks
index: True
text_combine: Restructure code as in coniks-go repo - The existing code base should be reorganized into the following packages: - crypto - client - merkletree - keyserver - protocol - utils These packages should be considered for the future: - bots for third-party account verification - storage for persistent storage backend hooks
label: main
text: restructure code as in coniks go repo the existing code base should be reorganized into the following packages crypto client merkletree keyserver protocol utils these packages should be considered for the future bots for third party account verification storage for persistent storage backend hooks
binary_label: 1

Unnamed: 0: 5,756
id: 30,513,559,197
type: IssuesEvent
created_at: 2023-07-18 23:37:35
repo: VoronDesign/VoronUsers
repo_url: https://api.github.com/repos/VoronDesign/VoronUsers
action: closed
title: 350mm issue with V2.4 Skirt switch mod by tayto-chip
labels: Action required by maintainers
body: Sorry @tayto-gp there is no user with the name tayto-chip on discord which the readme says to contact with issues. The 350mm side a skirt won't work the nub is too long. ![IMG_8183](https://github.com/VoronDesign/VoronUsers/assets/1868661/ab194d63-67b9-4b9c-99e2-b0c94e56e1ea) ![IMG_8184](https://github.com/VoronDesign/VoronUsers/assets/1868661/46effc7f-29eb-40a2-a077-355e74581478) ![IMG_8185](https://github.com/VoronDesign/VoronUsers/assets/1868661/e004f1a7-148c-48a1-9997-2c842f3a10ac)
index: True
text_combine: 350mm issue with V2.4 Skirt switch mod by tayto-chip - Sorry @tayto-gp there is no user with the name tayto-chip on discord which the readme says to contact with issues. The 350mm side a skirt won't work the nub is too long. ![IMG_8183](https://github.com/VoronDesign/VoronUsers/assets/1868661/ab194d63-67b9-4b9c-99e2-b0c94e56e1ea) ![IMG_8184](https://github.com/VoronDesign/VoronUsers/assets/1868661/46effc7f-29eb-40a2-a077-355e74581478) ![IMG_8185](https://github.com/VoronDesign/VoronUsers/assets/1868661/e004f1a7-148c-48a1-9997-2c842f3a10ac)
label: main
text: issue with skirt switch mod by tayto chip sorry tayto gp there is no user with the name tayto chip on discord which the readme says to contact with issues the side a skirt won t work the nub is too long
binary_label: 1

Unnamed: 0: 92,669
id: 10,760,936,158
type: IssuesEvent
created_at: 2019-10-31 19:40:12
repo: Automattic/simplenote-electron
repo_url: https://api.github.com/repos/Automattic/simplenote-electron
action: closed
title: Add Evernote notebook as a tag
labels: documentation
body: <!-- Thanks for contributing to Simplenote! Pick a clear title ("Note editor: emojis not displaying correctly") and proceed. Please review the FAQs before submitting an issue: https://github.com/Automattic/simplenote-electron/labels/FAQ Mac users: Does your Simplenote app have a file size of less than 50 MB? Then you are using simplenote-macos, not simplenote-electron. Please post your issue here: https://github.com/Automattic/simplenote-macos --> #### Steps to reproduce 1. Export all notes from Evernote across notebooks via Windows desktop app 2. Import Evernote.enex file into Simplenote #### What I expected Notes would be tagged with the Evernote notebook that they were in #### What happened instead No notebook tag was applied Looking in the `.enex` files this might be an issue with the Evernote export that it doesn't include the notebook. If there is nothing that can be done, it might be worth noting in your help that you should pre-tag everything with the notebook name in Evernote before exporting (which is quite easy to do) as it's a big hassle to delete and re-import 1000s of notes in Simplenote #### Simplenote version <!-- Here's the version number of our latest release: https://github.com/Automattic/simplenote-electron/releases/latest --> v1.3.4 #### OS version Windows 10 #### Screenshot / Video <!-- PLEASE NOTE - These comments won't show up when you submit the issue. - Everything is optional, but try to add as many details as possible. - If requesting a new feature, explain why you'd like to see it added. -->
index: 1.0
text_combine: Add Evernote notebook as a tag - <!-- Thanks for contributing to Simplenote! Pick a clear title ("Note editor: emojis not displaying correctly") and proceed. Please review the FAQs before submitting an issue: https://github.com/Automattic/simplenote-electron/labels/FAQ Mac users: Does your Simplenote app have a file size of less than 50 MB? Then you are using simplenote-macos, not simplenote-electron. Please post your issue here: https://github.com/Automattic/simplenote-macos --> #### Steps to reproduce 1. Export all notes from Evernote across notebooks via Windows desktop app 2. Import Evernote.enex file into Simplenote #### What I expected Notes would be tagged with the Evernote notebook that they were in #### What happened instead No notebook tag was applied Looking in the `.enex` files this might be an issue with the Evernote export that it doesn't include the notebook. If there is nothing that can be done, it might be worth noting in your help that you should pre-tag everything with the notebook name in Evernote before exporting (which is quite easy to do) as it's a big hassle to delete and re-import 1000s of notes in Simplenote #### Simplenote version <!-- Here's the version number of our latest release: https://github.com/Automattic/simplenote-electron/releases/latest --> v1.3.4 #### OS version Windows 10 #### Screenshot / Video <!-- PLEASE NOTE - These comments won't show up when you submit the issue. - Everything is optional, but try to add as many details as possible. - If requesting a new feature, explain why you'd like to see it added. -->
label: non_main
text: add evernote notebook as a tag thanks for contributing to simplenote pick a clear title note editor emojis not displaying correctly and proceed please review the faqs before submitting an issue mac users does your simplenote app have a file size of less than mb then you are using simplenote macos not simplenote electron please post your issue here steps to reproduce export all notes from evernote across notebooks via windows desktop app import evernote enex file into simplenote what i expected notes would be tagged with the evernote notebook that they were in what happened instead no notebook tag was applied looking in the enex files this might be an issue with the evernote export that it doesn t include the notebook if there is nothing that can be done it might be worth noting in your help that you should pre tag everything with the notebook name in evernote before exporting which is quite easy to do as it s a big hassle to delete and re import of notes in simplenote simplenote version here s the version number of our latest release os version windows screenshot video please note these comments won t show up when you submit the issue everything is optional but try to add as many details as possible if requesting a new feature explain why you d like to see it added
binary_label: 0

Unnamed: 0: 320,952
id: 23,832,855,381
type: IssuesEvent
created_at: 2022-09-06 00:31:54
repo: farhanfadila1717/slide_countdown
repo_url: https://api.github.com/repos/farhanfadila1717/slide_countdown
action: opened
title: Readme Add documentation `start`, `stop`, `changeDuration`.
labels: documentation
body: Ovveride StreamDuration example for `start`, `stop`, `changeDuration` countdown.
index: 1.0
text_combine: Readme Add documentation `start`, `stop`, `changeDuration`. - Ovveride StreamDuration example for `start`, `stop`, `changeDuration` countdown.
label: non_main
text: readme add documentation start stop changeduration ovveride streamduration example for start stop changeduration countdown
binary_label: 0

Unnamed: 0: 3,444
id: 13,212,219,958
type: IssuesEvent
created_at: 2020-08-16 05:29:51
repo: ansible/ansible
repo_url: https://api.github.com/repos/ansible/ansible
action: closed
title: terraform: Plan file doesn't seen in next steps
labels: affects_2.8 bot_closed cloud collection collection:community.general module needs_collection_redirect needs_maintainer needs_triage support:community
body: Continue of https://github.com/ansible/ansible/issues/39611 Fix doesn't work (https://github.com/ansible/ansible/commit/9a607283aafce8f1eb424df6b4c567095844bfd7) At this moment i specify plan file, but next step doesn't see it ##### COMPONENT NAME terraform Ansible version: ``` ansible 2.8.3 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/aermakov/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/dist-packages/ansible executable location = /usr/bin/ansible python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0] ``` Can you help to handle it? @mohitkumarsharmaflux7 @ryansb Code: ``` - name: Check plan and created elements terraform: project_path: '{{ playbook_dir }}/roles/terraform/templates/terraform' plan_file: '{{ playbook_dir }}/roles/terraform/templates/terraform/okd.tfplan' state: planned force_init: true - name: Create Vms and prepare inventoryfile terraform: project_path: '{{ playbook_dir }}/roles/terraform/templates/terraform' state: present ```
index: True
text_combine: terraform: Plan file doesn't seen in next steps - Continue of https://github.com/ansible/ansible/issues/39611 Fix doesn't work (https://github.com/ansible/ansible/commit/9a607283aafce8f1eb424df6b4c567095844bfd7) At this moment i specify plan file, but next step doesn't see it ##### COMPONENT NAME terraform Ansible version: ``` ansible 2.8.3 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/aermakov/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/dist-packages/ansible executable location = /usr/bin/ansible python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0] ``` Can you help to handle it? @mohitkumarsharmaflux7 @ryansb Code: ``` - name: Check plan and created elements terraform: project_path: '{{ playbook_dir }}/roles/terraform/templates/terraform' plan_file: '{{ playbook_dir }}/roles/terraform/templates/terraform/okd.tfplan' state: planned force_init: true - name: Create Vms and prepare inventoryfile terraform: project_path: '{{ playbook_dir }}/roles/terraform/templates/terraform' state: present ```
label: main
text: terraform plan file doesn t seen in next steps continue of fix doesn t work at this moment i specify plan file but next step doesn t see it component name terraform ansible version ansible config file etc ansible ansible cfg configured module search path ansible python module location usr lib dist packages ansible executable location usr bin ansible python version default nov can you help to handle it ryansb code name check plan and created elements terraform project path playbook dir roles terraform templates terraform plan file playbook dir roles terraform templates terraform okd tfplan state planned force init true name create vms and prepare inventoryfile terraform project path playbook dir roles terraform templates terraform state present
binary_label: 1

Unnamed: 0: 198,247
id: 22,620,974,484
type: IssuesEvent
created_at: 2022-06-30 06:16:55
repo: ioana-nicolae/testing-functionality
repo_url: https://api.github.com/repos/ioana-nicolae/testing-functionality
action: opened
title: CVE-2021-43797 (Medium) detected in netty-codec-http-4.1.48.Final.jar
labels: security vulnerability
body: ## CVE-2021-43797 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.48.Final.jar</b></p></summary> <p>Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers and clients.</p> <p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec-http/4.1.48.Final/netty-codec-http-4.1.48.Final.jar</p> <p> Dependency Hierarchy: - aws-java-sdk-1.11.856.jar (Root Library) - aws-java-sdk-kinesisvideo-1.11.856.jar - :x: **netty-codec-http-4.1.48.Final.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ioana-nicolae/testing-functionality/commit/b9cf710c94adea695ef39d08725a0ef0851297b6">b9cf710c94adea695ef39d08725a0ef0851297b6</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. Netty prior to version 4.1.71.Final skips control chars when they are present at the beginning / end of the header name. It should instead fail fast as these are not allowed by the spec and could lead to HTTP request smuggling. Failing to do the validation might cause netty to "sanitize" header names before it forward these to another remote system when used as proxy. This remote system can't see the invalid usage anymore, and therefore does not do the validation itself. Users should upgrade to version 4.1.71.Final. Mend Note: After conducting further research, Mend has determined that all versions of netty up to version 4.1.71.Final are vulnerable to CVE-2021-43797. <p>Publish Date: 2021-12-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43797>CVE-2021-43797</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="CVE-2021-43797">CVE-2021-43797</a></p> <p>Release Date: 2021-12-09</p> <p>Fix Resolution (io.netty:netty-codec-http): 4.1.71.Final</p> <p>Direct dependency fix Resolution (com.amazonaws:aws-java-sdk): 1.11.875</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
index: True
text_combine: CVE-2021-43797 (Medium) detected in netty-codec-http-4.1.48.Final.jar - ## CVE-2021-43797 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.48.Final.jar</b></p></summary> <p>Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers and clients.</p> <p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p> <p>Path to dependency file: /pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-codec-http/4.1.48.Final/netty-codec-http-4.1.48.Final.jar</p> <p> Dependency Hierarchy: - aws-java-sdk-1.11.856.jar (Root Library) - aws-java-sdk-kinesisvideo-1.11.856.jar - :x: **netty-codec-http-4.1.48.Final.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ioana-nicolae/testing-functionality/commit/b9cf710c94adea695ef39d08725a0ef0851297b6">b9cf710c94adea695ef39d08725a0ef0851297b6</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. Netty prior to version 4.1.71.Final skips control chars when they are present at the beginning / end of the header name. It should instead fail fast as these are not allowed by the spec and could lead to HTTP request smuggling. Failing to do the validation might cause netty to "sanitize" header names before it forward these to another remote system when used as proxy. This remote system can't see the invalid usage anymore, and therefore does not do the validation itself. Users should upgrade to version 4.1.71.Final. Mend Note: After conducting further research, Mend has determined that all versions of netty up to version 4.1.71.Final are vulnerable to CVE-2021-43797. <p>Publish Date: 2021-12-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-43797>CVE-2021-43797</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="CVE-2021-43797">CVE-2021-43797</a></p> <p>Release Date: 2021-12-09</p> <p>Fix Resolution (io.netty:netty-codec-http): 4.1.71.Final</p> <p>Direct dependency fix Resolution (com.amazonaws:aws-java-sdk): 1.11.875</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
label: non_main
text: cve medium detected in netty codec http final jar cve medium severity vulnerability vulnerable library netty codec http final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository io netty netty codec http final netty codec http final jar dependency hierarchy aws java sdk jar root library aws java sdk kinesisvideo jar x netty codec http final jar vulnerable library found in head commit a href found in base branch main vulnerability details netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers clients netty prior to version final skips control chars when they are present at the beginning end of the header name it should instead fail fast as these are not allowed by the spec and could lead to http request smuggling failing to do the validation might cause netty to sanitize header names before it forward these to another remote system when used as proxy this remote system can t see the invalid usage anymore and therefore does not do the validation itself users should upgrade to version final mend note after conducting further research mend has determined that all versions of netty up to version final are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin cve release date fix resolution io netty netty codec http final direct dependency fix resolution com amazonaws aws java sdk rescue worker helmet automatic remediation is available for this issue
binary_label: 0

Unnamed: 0: 19,562
id: 14,236,968,118
type: IssuesEvent
created_at: 2020-11-18 16:37:49
repo: rokwire/safer-illinois-app
repo_url: https://api.github.com/repos/rokwire/safer-illinois-app
action: closed
title: [USABILITY] - Add an external link icon to the Feedback button.
labels: Type: Usability
body: **Describe the bug** Add an external link icon to the Feedback button. **To Reproduce** Steps to reproduce the behavior: 1. Download the Safer Illinois app. 2. Complete the initial onboarding process and logged in as a University student. 3. COVID 19 screen is displayed. 4. Tap on the Settings icon. 5. On the Settings screen, tap on Submit Feedback button **Actual Result** The submit screen is not part of the Safer app. Adding an external link icon helps the user to understand as the feedback screen is an external screen. **Expected behavior** Add an external link icon to the Feedback button.
index: True
text_combine: [USABILITY] - Add an external link icon to the Feedback button. - **Describe the bug** Add an external link icon to the Feedback button. **To Reproduce** Steps to reproduce the behavior: 1. Download the Safer Illinois app. 2. Complete the initial onboarding process and logged in as a University student. 3. COVID 19 screen is displayed. 4. Tap on the Settings icon. 5. On the Settings screen, tap on Submit Feedback button **Actual Result** The submit screen is not part of the Safer app. Adding an external link icon helps the user to understand as the feedback screen is an external screen. **Expected behavior** Add an external link icon to the Feedback button.
label: non_main
text: add an external link icon to the feedback button describe the bug add an external link icon to the feedback button to reproduce steps to reproduce the behavior download the safer illinois app complete the initial onboarding process and logged in as a university student covid screen is displayed tap on the settings icon on the settings screen tap on submit feedback button actual result the submit screen is not part of the safer app adding an external link icon helps the user to understand as the feedback screen is an external screen expected behavior add an external link icon to the feedback button
binary_label: 0

Unnamed: 0: 185,512
id: 15,024,068,474
type: IssuesEvent
created_at: 2021-02-01 19:06:29
repo: mermaid-js/mermaid
repo_url: https://api.github.com/repos/mermaid-js/mermaid
action: closed
title: Update NPM readme
labels: Area: Documentation Status: Approved Type: Other
body: As the Readme got reworked with #1045 it also needs to be updated on [npm](https://www.npmjs.com/package/mermaid). The images are broken currently because they are referenced by a relative path. They might need to be replaced with an absolute url.
index: 1.0
text_combine: Update NPM readme - As the Readme got reworked with #1045 it also needs to be updated on [npm](https://www.npmjs.com/package/mermaid). The images are broken currently because they are referenced by a relative path. They might need to be replaced with an absolute url.
label: non_main
text: update npm readme as the readme got reworked with it also needs to be updated on the images are broken currently because they are referenced by a relative path they might need to be replaced with an absolute url
binary_label: 0

Unnamed: 0: 156,876
id: 24,626,127,855
type: IssuesEvent
created_at: 2022-10-16 14:45:08
repo: dotnet/efcore
repo_url: https://api.github.com/repos/dotnet/efcore
action: closed
title: Why does EF Core pluralize table names by default?
labels: closed-by-design customer-reported
body: As described in this post: https://entityframeworkcore.com/knowledge-base/37493095/entity-framework-core-rc2-table-name-pluralization I'm wondering why the EF Core team took the decision to use the name of the DbSet property for the SQL table name by default? This is generally going to result in plural table names, as that is the appropriate name for the DbSet properties. I thought this was considered bad practice, and that SQL table named should be singular - why this default?
index: 1.0
text_combine: Why does EF Core pluralize table names by default? - As described in this post: https://entityframeworkcore.com/knowledge-base/37493095/entity-framework-core-rc2-table-name-pluralization I'm wondering why the EF Core team took the decision to use the name of the DbSet property for the SQL table name by default? This is generally going to result in plural table names, as that is the appropriate name for the DbSet properties. I thought this was considered bad practice, and that SQL table named should be singular - why this default?
label: non_main
text: why does ef core pluralize table names by default as described in this post i m wondering why the ef core team took the decision to use the name of the dbset property for the sql table name by default this is generally going to result in plural table names as that is the appropriate name for the dbset properties i thought this was considered bad practice and that sql table named should be singular why this default
binary_label: 0

514,899
14,946,342,387
IssuesEvent
2021-01-26 06:34:38
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
mail.google.com - site is not usable
browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical
<!-- @browser: Firefox 85.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/66239 --> **URL**: https://mail.google.com/mail/u/0/?tab=wm **Browser / Version**: Firefox 85.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Internet Explorer **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: the page loads, but it appears blank <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/30a92310-54f8-40c8-af8b-cc5c99c5c5d1.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210118153634</li><li>channel: release</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2021/1/c68eefd5-7ee3-4a43-8121-4199b0e090d2) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
mail.google.com - site is not usable - <!-- @browser: Firefox 85.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/66239 --> **URL**: https://mail.google.com/mail/u/0/?tab=wm **Browser / Version**: Firefox 85.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Internet Explorer **Problem type**: Site is not usable **Description**: Page not loading correctly **Steps to Reproduce**: the page loads, but it appears blank <details> <summary>View the screenshot</summary> <img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/30a92310-54f8-40c8-af8b-cc5c99c5c5d1.jpg"> </details> <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210118153634</li><li>channel: release</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2021/1/c68eefd5-7ee3-4a43-8121-4199b0e090d2) _From [webcompat.com](https://webcompat.com/) with ❤️_
non_main
mail google com site is not usable url browser version firefox operating system windows tested another browser yes internet explorer problem type site is not usable description page not loading correctly steps to reproduce the page loads but it appears blank view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
0
3,223
12,367,096,845
IssuesEvent
2020-05-18 11:41:20
pace/bricks
https://api.github.com/repos/pace/bricks
closed
Add log breadcrumbs to log.handleError()
EST::Hours S::Ready T::Maintainance
Currently, breadcrumbs are only attached when an http request is aborted with an error or panic. Sometimes we want to use handleError to report a message to sentry without aborting the request. Breadcrumbs should be added there as well.
True
Add log breadcrumbs to log.handleError() - Currently, breadcrumbs are only attached when an http request is aborted with an error or panic. Sometimes we want to use handleError to report a message to sentry without aborting the request. Breadcrumbs should be added there as well.
main
add log breadcrumbs to log handleerror currently breadcrumbs are only attached when an http request is aborted with an error or panic sometimes we want to use handleerror to report a message to sentry without aborting the request breadcrumbs should be added there as well
1
1,827
6,577,345,978
IssuesEvent
2017-09-12 00:16:00
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
ec2_asg ignores replace_instances if lc_check is true
affects_2.0 aws bug_report cloud waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> cloud/amazon/ec2_asg ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.0.0.1 config file = <snip> configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> N/A ##### SUMMARY <!--- Explain the problem briefly --> Running `ec2_asg` with `replace_instances` set to a single instance and `lc_check` set to `yes` against an ASG with multiple instances causes it to ignore `replace_instances` and replace a random instance in the ASG. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> - Spin up an ASG with min, max and desired > 1 - Change the launch configuration for the ASG - Run `ec2_asg`, specifying a single instance for `replace_instances`, and `lc_check` = `yes` It will choose a random instance from the instances in the ASG which have the old LC. This seems to stem from [these lines](https://github.com/ansible/ansible-modules-core/blob/7314cc3867eb90bc1c098e29265ae48670ad35b1/cloud/amazon/ec2_asg.py#L628-L633) ignoring the passed-in `initial_instances` and instead producing its own list of instances to be terminated. 
<!--- Paste example playbooks or commands between quotes below --> ``` ec2_asg: lc_check: yes replace_batch_size: 1 replace_instances: my_instance_id name: my_asg min_size: 3 max_size: 3 desired_capacity: 3 launch_config_name: my_lc region: us-west-2 ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> It would spin up a new instance in the ASG, and then terminate the instance I specified above ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> It sometimes terminates the one I specify, other times it terminates a different one. <!--- Paste verbatim command output between quotes below --> ``` TASK [Cycling | ec2_asg | Cycle instance (only if its launch configuration differs from that of the ASG)] *** task path: cycle-asg-instance-with-status-check.yml:16 Wednesday 15 June 2016 15:31:18 +0000 (0:00:00.020) 0:04:47.913 ******** ESTABLISH LOCAL CONNECTION FOR USER: admin 127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736 )" ) 127.0.0.1 PUT /tmp/tmpTZaAaD TO /home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/ec2_asg 127.0.0.1 EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/ec2_asg; rm -rf "/home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/" > /dev/null 2>&1 changed: [localhost] => {"availability_zones": ["us-west-2c"], "changed": true, "default_cooldown": 300, "desired_capacity": 2, "health_check_period": 300, "health_check_type": "EC2", "healthy_instances": 3, "in_service_instances": 3, "instance_facts": {"i-01fca335e1e29c65c": {"health_status": "Healthy", "launch_config_name": "terraform-5nlhqhrvt5e3taugrqzifth2su", "lifecycle_state": "InService"}, 
"i-03ea5a0be5b5b92a5": {"health_status": "Healthy", "launch_config_name": "<snip>", "lifecycle_state": "InService"}, "i-0ad0d81d719fe7bc1": {"health_status": "Healthy", "launch_config_name": null, "lifecycle_state": "InService"}}, "instances": ["i-01fca335e1e29c65c", "i-03ea5a0be5b5b92a5", "i-0ad0d81d719fe7bc1"], "invocation": {"module_args": {"availability_zones": null, "aws_access_key": null, "aws_secret_key": null, "default_cooldown": 300, "desired_capacity": 2, "ec2_url": null, "health_check_period": 300, "health_check_type": "EC2", "launch_config_name": "<snip>", "lc_check": true, "load_balancers": null, "max_size": 2, "min_size": 2, "name": "router-jljw-us-west-2c", "profile": null, "region": "us-west-2", "replace_all_instances": false, "replace_batch_size": 1, "replace_instances": ["i-0065d89d324fe72df"], "security_token": null, "state": "present", "tags": [], "termination_policies": ["Default"], "validate_certs": true, "vpc_zone_identifier": null, "wait_for_instances": true, "wait_timeout": 300}, "module_name": "ec2_asg"}, "launch_config_name": "<snip>", "load_balancers":<snip>, "max_size": 2, "min_size": 2, "name": "my_asg", "pending_instances": 0, "placement_group": null, "tags": {"cleaner-destroy-after": "2016-06-14 16:49:17 +0000"}, "terminating_instances": 0, "termination_policies": ["Default"], "unhealthy_instances": 0, "viable_instances": 3, "vpc_zone_identifier": "subnet-f453acac"} ```
True
ec2_asg ignores replace_instances if lc_check is true - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> cloud/amazon/ec2_asg ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.0.0.1 config file = <snip> configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> N/A ##### SUMMARY <!--- Explain the problem briefly --> Running `ec2_asg` with `replace_instances` set to a single instance and `lc_check` set to `yes` against an ASG with multiple instances causes it to ignore `replace_instances` and replace a random instance in the ASG. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> - Spin up an ASG with min, max and desired > 1 - Change the launch configuration for the ASG - Run `ec2_asg`, specifying a single instance for `replace_instances`, and `lc_check` = `yes` It will choose a random instance from the instances in the ASG which have the old LC. This seems to stem from [these lines](https://github.com/ansible/ansible-modules-core/blob/7314cc3867eb90bc1c098e29265ae48670ad35b1/cloud/amazon/ec2_asg.py#L628-L633) ignoring the passed-in `initial_instances` and instead producing its own list of instances to be terminated. 
<!--- Paste example playbooks or commands between quotes below --> ``` ec2_asg: lc_check: yes replace_batch_size: 1 replace_instances: my_instance_id name: my_asg min_size: 3 max_size: 3 desired_capacity: 3 launch_config_name: my_lc region: us-west-2 ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> It would spin up a new instance in the ASG, and then terminate the instance I specified above ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> It sometimes terminates the one I specify, other times it terminates a different one. <!--- Paste verbatim command output between quotes below --> ``` TASK [Cycling | ec2_asg | Cycle instance (only if its launch configuration differs from that of the ASG)] *** task path: cycle-asg-instance-with-status-check.yml:16 Wednesday 15 June 2016 15:31:18 +0000 (0:00:00.020) 0:04:47.913 ******** ESTABLISH LOCAL CONNECTION FOR USER: admin 127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736 )" ) 127.0.0.1 PUT /tmp/tmpTZaAaD TO /home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/ec2_asg 127.0.0.1 EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/ec2_asg; rm -rf "/home/admin/.ansible/tmp/ansible-tmp-1466004678.61-36579860387736/" > /dev/null 2>&1 changed: [localhost] => {"availability_zones": ["us-west-2c"], "changed": true, "default_cooldown": 300, "desired_capacity": 2, "health_check_period": 300, "health_check_type": "EC2", "healthy_instances": 3, "in_service_instances": 3, "instance_facts": {"i-01fca335e1e29c65c": {"health_status": "Healthy", "launch_config_name": "terraform-5nlhqhrvt5e3taugrqzifth2su", "lifecycle_state": "InService"}, 
"i-03ea5a0be5b5b92a5": {"health_status": "Healthy", "launch_config_name": "<snip>", "lifecycle_state": "InService"}, "i-0ad0d81d719fe7bc1": {"health_status": "Healthy", "launch_config_name": null, "lifecycle_state": "InService"}}, "instances": ["i-01fca335e1e29c65c", "i-03ea5a0be5b5b92a5", "i-0ad0d81d719fe7bc1"], "invocation": {"module_args": {"availability_zones": null, "aws_access_key": null, "aws_secret_key": null, "default_cooldown": 300, "desired_capacity": 2, "ec2_url": null, "health_check_period": 300, "health_check_type": "EC2", "launch_config_name": "<snip>", "lc_check": true, "load_balancers": null, "max_size": 2, "min_size": 2, "name": "router-jljw-us-west-2c", "profile": null, "region": "us-west-2", "replace_all_instances": false, "replace_batch_size": 1, "replace_instances": ["i-0065d89d324fe72df"], "security_token": null, "state": "present", "tags": [], "termination_policies": ["Default"], "validate_certs": true, "vpc_zone_identifier": null, "wait_for_instances": true, "wait_timeout": 300}, "module_name": "ec2_asg"}, "launch_config_name": "<snip>", "load_balancers":<snip>, "max_size": 2, "min_size": 2, "name": "my_asg", "pending_instances": 0, "placement_group": null, "tags": {"cleaner-destroy-after": "2016-06-14 16:49:17 +0000"}, "terminating_instances": 0, "termination_policies": ["Default"], "unhealthy_instances": 0, "viable_instances": 3, "vpc_zone_identifier": "subnet-f453acac"} ```
main
asg ignores replace instances if lc check is true issue type bug report component name cloud amazon asg ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary running asg with replace instances set to a single instance and lc check set to yes against an asg with multiple instances causes it to ignore replace instances and replace a random instance in the asg steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used spin up an asg with min max and desired change the launch configuration for the asg run asg specifying a single instance for replace instances and lc check yes it will choose a random instance from the instances in the asg which have the old lc this seems to stem from ignoring the passed in initial instances and instead producing its own list of instances to be terminated asg lc check yes replace batch size replace instances my instance id name my asg min size max size desired capacity launch config name my lc region us west expected results it would spin up a new instance in the asg and then terminate the instance i specified above actual results it sometimes terminates the one i specify other times it terminates a different one task task path cycle asg instance with status check yml wednesday june establish local connection for user admin exec umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put tmp tmptzaaad to home admin ansible tmp ansible tmp asg exec lang en us utf lc all en us utf lc messages en us utf usr bin python home admin ansible tmp ansible tmp asg rm rf home admin ansible tmp ansible tmp dev null changed availability zones changed 
true default cooldown desired capacity health check period health check type healthy instances in service instances instance facts i health status healthy launch config name terraform lifecycle state inservice i health status healthy launch config name lifecycle state inservice i health status healthy launch config name null lifecycle state inservice instances invocation module args availability zones null aws access key null aws secret key null default cooldown desired capacity url null health check period health check type launch config name lc check true load balancers null max size min size name router jljw us west profile null region us west replace all instances false replace batch size replace instances i security token null state present tags termination policies validate certs true vpc zone identifier null wait for instances true wait timeout module name asg launch config name load balancers max size min size name my asg pending instances placement group null tags cleaner destroy after terminating instances termination policies unhealthy instances viable instances vpc zone identifier subnet
1
1,344
5,721,693,133
IssuesEvent
2017-04-20 07:28:22
tomchentw/react-google-maps
https://api.github.com/repos/tomchentw/react-google-maps
closed
HeatmapLayer Broken
CALL_FOR_MAINTAINERS
HeatmapLayer should use `google.maps.visualization.HeatmapLayer` instead of just `google.maps.HeatmapLayer` in order to construct a HeatmapLayer.
True
HeatmapLayer Broken - HeatmapLayer should use `google.maps.visualization.HeatmapLayer` instead of just `google.maps.HeatmapLayer` in order to construct a HeatmapLayer.
main
heatmaplayer broken heatmaplayer should use google maps visualization heatmaplayer instead of just google maps heatmaplayer in order to construct a heatmaplayer
1
6,661
2,610,258,666
IssuesEvent
2015-02-26 19:22:29
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
Is laser acne treatment in Shenzhen any good?
auto-migrated Priority-Medium Type-Defect
``` Is laser acne treatment in Shenzhen any good? [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818] Shenzhen Hanfang Keyan is a professional acne-removal chain built around a Korean secret formula — Hanfang Keyan, a treatment-grade product with a national cosmetics approval number and a premier acne remedy. The chain pairs the Korean formula with a professional "no-rebound" healthy acne-removal technique and an advanced "deluxe color-light" device, pioneering contracted, guaranteed-cure treatment of pimples and acne in China, and has successfully cleared the acne from many customers' faces. ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:52
1.0
Is laser acne treatment in Shenzhen any good? - ``` Is laser acne treatment in Shenzhen any good? [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818] Shenzhen Hanfang Keyan is a professional acne-removal chain built around a Korean secret formula — Hanfang Keyan, a treatment-grade product with a national cosmetics approval number and a premier acne remedy. The chain pairs the Korean formula with a professional "no-rebound" healthy acne-removal technique and an advanced "deluxe color-light" device, pioneering contracted, guaranteed-cure treatment of pimples and acne in China, and has successfully cleared the acne from many customers' faces. ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:52
non_main
is laser acne treatment in shenzhen any good is laser acne treatment in shenzhen any good shenzhen hanfang keyan national hotline hour qq shenzhen hanfang keyan is a professional acne removal chain built around a korean secret formula hanfang keyan a treatment grade product with a national cosmetics approval number and a premier acne remedy the chain pairs the korean formula with a professional no rebound healthy acne removal technique and an advanced deluxe color light device pioneering contracted guaranteed cure treatment of pimples and acne in china and has successfully cleared the acne from many customers faces original issue reported on code google com by szft com on may at
0
186,449
14,394,699,316
IssuesEvent
2020-12-03 01:55:08
github-vet/rangeclosure-findings
https://api.github.com/repos/github-vet/rangeclosure-findings
closed
mraksoll4/lnd: nursery_store_test.go; 3 LoC
fresh test tiny
Found a possible issue in [mraksoll4/lnd](https://www.github.com/mraksoll4/lnd) at [nursery_store_test.go](https://github.com/mraksoll4/lnd/blob/e495a1057c2a4b9e3df37f2bac991cedcd64c89a/nursery_store_test.go#L159-L161) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/mraksoll4/lnd/blob/e495a1057c2a4b9e3df37f2bac991cedcd64c89a/nursery_store_test.go#L159-L161) <details> <summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary> ```go for _, htlcOutput := range test.htlcOutputs { assertCribAtExpiryHeight(t, ns, &htlcOutput) } ``` Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to htlcOutput at line 160 may start a goroutine </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: e495a1057c2a4b9e3df37f2bac991cedcd64c89a
1.0
mraksoll4/lnd: nursery_store_test.go; 3 LoC - Found a possible issue in [mraksoll4/lnd](https://www.github.com/mraksoll4/lnd) at [nursery_store_test.go](https://github.com/mraksoll4/lnd/blob/e495a1057c2a4b9e3df37f2bac991cedcd64c89a/nursery_store_test.go#L159-L161) The below snippet of Go code triggered static analysis which searches for goroutines and/or defer statements which capture loop variables. [Click here to see the code in its original context.](https://github.com/mraksoll4/lnd/blob/e495a1057c2a4b9e3df37f2bac991cedcd64c89a/nursery_store_test.go#L159-L161) <details> <summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary> ```go for _, htlcOutput := range test.htlcOutputs { assertCribAtExpiryHeight(t, ns, &htlcOutput) } ``` Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to htlcOutput at line 160 may start a goroutine </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: e495a1057c2a4b9e3df37f2bac991cedcd64c89a
non_main
lnd nursery store test go loc found a possible issue in at the below snippet of go code triggered static analysis which searches for goroutines and or defer statements which capture loop variables click here to show the line s of go which triggered the analyzer go for htlcoutput range test htlcoutputs assertcribatexpiryheight t ns htlcoutput below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to htlcoutput at line may start a goroutine leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
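The analyzer finding in the record above is the classic pre-Go-1.22 pitfall: a `range` loop reuses a single loop variable, so taking `&htlcOutput` inside the body can hand out a pointer that later aliases the final element if it escapes into a goroutine or a stored reference. The conventional mitigation is to shadow the loop variable with a per-iteration copy, sketched below with illustrative names (this is not the lnd code).

```go
package main

import "fmt"

// copyPointers returns one pointer per element. Shadowing the range
// variable gives each iteration its own copy, so the returned pointers
// do not all alias one shared loop variable (the hazard flagged above).
func copyPointers(xs []int) []*int {
	var ptrs []*int
	for _, x := range xs {
		x := x // per-iteration copy; safe to take its address
		ptrs = append(ptrs, &x)
	}
	return ptrs
}

func main() {
	// Each pointer keeps its own element's value.
	for _, p := range copyPointers([]int{1, 2, 3}) {
		fmt.Println(*p)
	}
}
```

Since Go 1.22 the loop variable is per-iteration by default, which removes the hazard, but the shadowing copy remains the portable fix for code built with older toolchains.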
102,694
4,158,593,656
IssuesEvent
2016-06-17 03:53:59
BYU-ARCLITE/Ayamel-Examples
https://api.github.com/repos/BYU-ARCLITE/Ayamel-Examples
closed
CaptionAider: Creating a new subtitle track and saving/editing it multiple times creates multiple tracks
Bug CaptionAider Mac PC Priority 1
When I create captions in CaptionAider, it creates 2 caption tracks in both configuration and under the caption/subtitles menu. The tracks are identical unless you make any edits in CaptionAider. After that, the duplicate remains how it was before you made any edits. ![capture](https://cloud.githubusercontent.com/assets/10120523/14502669/aa446c34-0169-11e6-9d5a-5b7963ff4d1d.PNG)
1.0
CaptionAider: Creating a new subtitle track and saving/editing it multiple times creates multiple tracks - When I create captions in CaptionAider, it creates 2 caption tracks in both configuration and under the caption/subtitles menu. The tracks are identical unless you make any edits in CaptionAider. After that, the duplicate remains how it was before you made any edits. ![capture](https://cloud.githubusercontent.com/assets/10120523/14502669/aa446c34-0169-11e6-9d5a-5b7963ff4d1d.PNG)
non_main
captionaider creating a new subtitle track and saving editing it multiple times creates multiple tracks when i create captions in captionaider it creates caption tracks in both configuration and under the caption subtitles menu the tracks are identical unless you make any edits in captionaider after that the duplicate remains how it was before you made any edits
0
349,352
10,467,866,136
IssuesEvent
2019-09-22 09:19:43
SkyrimTogether/issues-game
https://api.github.com/repos/SkyrimTogether/issues-game
closed
Bug with getting respawned but immediatly hitted by an enemy and getting to the down state again
comp: client priority: 2 (medium) type: bug
## Description My friend had a bug, when he was killed by a Whiterun Guard. He went into a down state, waiting for a revive. When he typed /respawn command in console he stud up with a half of hp. But after getting attacked by a Guard he went into a down state with a 1hp again. When he tried to type /respawn command again the console was sending a request a revive message or so. Saying, that he's not dead yet to be respawned again. ## Steps to reproduce How to reproduce this issue: 1. Start the game. 2. Connect to the server. 3. Got yourself downed by the enemy. 4. Make sure, that the enemy is still hitting your corpse. 5. Type /respawn command in console. 6. Get yourself respawned on the same place when you got downed by the enemy with a half of hp. 7. Enemy is attacking you again. 8. You're going into a down state again with 1hp. 9. Type /respawn command again and see an error message in a console. Attaching latest dmp files: https://drive.google.com/file/d/1WuT0timchUL3cW5YWTi8oDBIkHUGsvlC/view?usp=sharing https://drive.google.com/open?id=1fWFId8otnBmW_xbNzhg7B97qWoWecXHq https://drive.google.com/open?id=1abWhvPHg8mwtLy5IOqhr_prcKh8t9VQo ## Reproduction rate Mostly all the times, when you're trying to get respawned, while your corpse is still getting attacked by an enemy. ## Expected result To be respawned in a Shrine immediately without getting attacked by an enemy. Or having a temporary invincibility to be able to travel to the Shrine without getting any dmg by an enemy. ## Your environment * Game edition (choose on which edition do you have problems): * The Elder Scrolls V: Skyrim Special Edition * Skyrim Together Mod ## Evidence (optional) Don't have an evidence for this bug, i'm sorry 👎
1.0
Bug with getting respawned but immediatly hitted by an enemy and getting to the down state again - ## Description My friend had a bug, when he was killed by a Whiterun Guard. He went into a down state, waiting for a revive. When he typed /respawn command in console he stud up with a half of hp. But after getting attacked by a Guard he went into a down state with a 1hp again. When he tried to type /respawn command again the console was sending a request a revive message or so. Saying, that he's not dead yet to be respawned again. ## Steps to reproduce How to reproduce this issue: 1. Start the game. 2. Connect to the server. 3. Got yourself downed by the enemy. 4. Make sure, that the enemy is still hitting your corpse. 5. Type /respawn command in console. 6. Get yourself respawned on the same place when you got downed by the enemy with a half of hp. 7. Enemy is attacking you again. 8. You're going into a down state again with 1hp. 9. Type /respawn command again and see an error message in a console. Attaching latest dmp files: https://drive.google.com/file/d/1WuT0timchUL3cW5YWTi8oDBIkHUGsvlC/view?usp=sharing https://drive.google.com/open?id=1fWFId8otnBmW_xbNzhg7B97qWoWecXHq https://drive.google.com/open?id=1abWhvPHg8mwtLy5IOqhr_prcKh8t9VQo ## Reproduction rate Mostly all the times, when you're trying to get respawned, while your corpse is still getting attacked by an enemy. ## Expected result To be respawned in a Shrine immediately without getting attacked by an enemy. Or having a temporary invincibility to be able to travel to the Shrine without getting any dmg by an enemy. ## Your environment * Game edition (choose on which edition do you have problems): * The Elder Scrolls V: Skyrim Special Edition * Skyrim Together Mod ## Evidence (optional) Don't have an evidence for this bug, i'm sorry 👎
non_main
bug with getting respawned but immediatly hitted by an enemy and getting to the down state again description my friend had a bug when he was killed by a whiterun guard he went into a down state waiting for a revive when he typed respawn command in console he stud up with a half of hp but after getting attacked by a guard he went into a down state with a again when he tried to type respawn command again the console was sending a request a revive message or so saying that he s not dead yet to be respawned again steps to reproduce how to reproduce this issue start the game connect to the server got yourself downed by the enemy make sure that the enemy is still hitting your corpse type respawn command in console get yourself respawned on the same place when you got downed by the enemy with a half of hp enemy is attacking you again you re going into a down state again with type respawn command again and see an error message in a console attaching latest dmp files reproduction rate mostly all the times when you re trying to get respawned while your corpse is still getting attacked by an enemy expected result to be respawned in a shrine immediately without getting attacked by an enemy or having a temporary invincibility to be able to travel to the shrine without getting any dmg by an enemy your environment game edition choose on which edition do you have problems the elder scrolls v skyrim special edition skyrim together mod evidence optional don t have an evidence for this bug i m sorry 👎
0
77,316
14,784,531,990
IssuesEvent
2021-01-12 00:25:34
streetcomplete/StreetComplete
https://api.github.com/repos/streetcomplete/StreetComplete
opened
Separate view data in quest data classes
code cleanup
### Task For the cycleway quest, the cycleway answer options are cleanly seperated into two files: `Cycleway.kt` contains the "pure data" (the enum) and `CyclewayItem` contains the view-related part of a cycleway answer option. This is better than how it is done for for example the surface quest (and others). There, in `Surface.kt`, the enum is not pure data because is contains reference to view stuff and also Android-specific stuff. So for these quest(s), the data class should be refactored to be like for the cycleway class. ### Example So it should be.... ```kotlin enum class Surface(val value: String) { // or osmValue? ASPHALT("asphalt"), ... } // in SurfaceItem.kt fun Surface.asItem(): Item<Surface> = ... ``` ### Reason Apart from it being good style, it is a step towards making #1892 a little less work to do ### Quests where this "old style" is used - BuildingType - Surface ### And then And there is more. Quest types should no longer have a `String` as parameter but an enum. Compare `AddRecyclingType` (new style) with `AddRailwayCrossingBarrier` (old style). It is okay for the enum to define an OSM value like in the example with surface above.
1.0
Separate view data in quest data classes - ### Task For the cycleway quest, the cycleway answer options are cleanly separated into two files: `Cycleway.kt` contains the "pure data" (the enum) and `CyclewayItem` contains the view-related part of a cycleway answer option. This is better than how it is done for, for example, the surface quest (and others). There, in `Surface.kt`, the enum is not pure data because it contains references to view stuff and also Android-specific stuff. So for these quest(s), the data class should be refactored to be like the cycleway class. ### Example So it should be.... ```kotlin enum class Surface(val value: String) { // or osmValue? ASPHALT("asphalt"), ... } // in SurfaceItem.kt fun Surface.asItem(): Item<Surface> = ... ``` ### Reason Apart from it being good style, it is a step towards making #1892 a little less work to do ### Quests where this "old style" is used - BuildingType - Surface ### And then And there is more. Quest types should no longer have a `String` as parameter but an enum. Compare `AddRecyclingType` (new style) with `AddRailwayCrossingBarrier` (old style). It is okay for the enum to define an OSM value like in the example with surface above.
non_main
separate view data in quest data classes task for the cycleway quest the cycleway answer options are cleanly seperated into two files cycleway kt contains the pure data the enum and cyclewayitem contains the view related part of a cycleway answer option this is better than how it is done for for example the surface quest and others there in surface kt the enum is not pure data because is contains reference to view stuff and also android specific stuff so for these quest s the data class should be refactored to be like for the cycleway class example so it should be kotlin enum class surface val value string or osmvalue asphalt asphalt in surfaceitem kt fun surface asitem item reason apart from it being good style it is a step towards making a little less work to do quests where this old style is used buildingtype surface and then and there is more quest types should no longer have a string as parameter but an enum compare addrecyclingtype new style with addrailwaycrossingbarrier old style it is okay for the enum to define an osm value like in the example with surface above
0
4,210
6,447,246,796
IssuesEvent
2017-08-14 05:55:39
inveniosoftware/invenio-accounts
https://api.github.com/repos/inveniosoftware/invenio-accounts
closed
ext: make monkey patching optional
Service: INSPIRE Type: RFC
The release of `invenio-accounts==1.0.0b7` led to a failure in our build: https://travis-ci.org/inspirehep/inspire-next/jobs/262243957. In particular, what seems to be failing is the fact that the user in our acceptance is not logged in, that is [our mechanism](https://github.com/inspirehep/inspire-next/blob/12da525d1b939bcd4994e36cbb52492d3e140029/tests/acceptance/conftest.py#L88-L104) for saving/restoring cookies to bypass manual authentication is no longer working. In fact, I see some commits between `invenio-accounts==1.0.0b6` and `invenio-accounts=1.0.0b7` that seem to relate to this: https://github.com/inveniosoftware/invenio-accounts/commit/c80ee61e686d8929dde6c0d7fad770729b7b12a8, https://github.com/inveniosoftware/invenio-accounts/commit/b611762ed65ce064c7743cf5847f497a32caf2f7, and https://github.com/inveniosoftware/invenio-accounts/commit/0d22f84bef21a0a25581abc1f3ac85c51a7f1055. Note that [`login_user_via_session`](https://github.com/inveniosoftware/invenio-accounts/blob/4b1e8870ed3b6620adf35e64636a012a6492e093/invenio_accounts/testutils.py#L70-L81) won't work for us here as we have no `app.test_client` object to use, as the `acceptance` test suite is interacting through Selenium with **another** application. One possible fix is outlined in the title: we make the above monkey patching configurable, and we disable it in the configuration of INSPIRE. Another possibility is that we revert https://github.com/inspirehep/inspire-next/commit/63043361c5508c31d7c26ed728916f678c40589c on our side, but this incurs in a performance penalty in our test suite. There's probably more, but this is all I can think of right now... CC: @lnielsen
1.0
ext: make monkey patching optional - The release of `invenio-accounts==1.0.0b7` led to a failure in our build: https://travis-ci.org/inspirehep/inspire-next/jobs/262243957. In particular, what seems to be failing is the fact that the user in our acceptance is not logged in, that is [our mechanism](https://github.com/inspirehep/inspire-next/blob/12da525d1b939bcd4994e36cbb52492d3e140029/tests/acceptance/conftest.py#L88-L104) for saving/restoring cookies to bypass manual authentication is no longer working. In fact, I see some commits between `invenio-accounts==1.0.0b6` and `invenio-accounts=1.0.0b7` that seem to relate to this: https://github.com/inveniosoftware/invenio-accounts/commit/c80ee61e686d8929dde6c0d7fad770729b7b12a8, https://github.com/inveniosoftware/invenio-accounts/commit/b611762ed65ce064c7743cf5847f497a32caf2f7, and https://github.com/inveniosoftware/invenio-accounts/commit/0d22f84bef21a0a25581abc1f3ac85c51a7f1055. Note that [`login_user_via_session`](https://github.com/inveniosoftware/invenio-accounts/blob/4b1e8870ed3b6620adf35e64636a012a6492e093/invenio_accounts/testutils.py#L70-L81) won't work for us here as we have no `app.test_client` object to use, as the `acceptance` test suite is interacting through Selenium with **another** application. One possible fix is outlined in the title: we make the above monkey patching configurable, and we disable it in the configuration of INSPIRE. Another possibility is that we revert https://github.com/inspirehep/inspire-next/commit/63043361c5508c31d7c26ed728916f678c40589c on our side, but this incurs in a performance penalty in our test suite. There's probably more, but this is all I can think of right now... CC: @lnielsen
non_main
ext make monkey patching optional the release of invenio accounts led to a failure in our build in particular what seems to be failing is the fact that the user in our acceptance is not logged in that is for saving restoring cookies to bypass manual authentication is no longer working in fact i see some commits between invenio accounts and invenio accounts that seem to relate to this and note that won t work for us here as we have no app test client object to use as the acceptance test suite is interacting through selenium with another application one possible fix is outlined in the title we make the above monkey patching configurable and we disable it in the configuration of inspire another possibility is that we revert on our side but this incurs in a performance penalty in our test suite there s probably more but this is all i can think of right now cc lnielsen
0
1,407
6,041,090,568
IssuesEvent
2017-06-10 20:38:58
duckduckgo/zeroclickinfo-goodies
https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies
closed
SVN Cheat Sheet: Add description for `svn log`
Improvement Maintainer Approved Suggestion
Hi @Juholei nice and helpful Goody! I came along a few times and missed the `svn log` statement to display commit log messages with some helpful arguments (-l, -v). http://svnbook.red-bean.com/en/1.7/svn.ref.svn.c.log.html Any thoughts on adding it? ------ IA Page: http://duck.co/ia/view/svn_cheat_sheet [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @Juholei
True
SVN Cheat Sheet: Add description for `svn log` - Hi @Juholei nice and helpful Goody! I came along a few times and missed the `svn log` statement to display commit log messages with some helpful arguments (-l, -v). http://svnbook.red-bean.com/en/1.7/svn.ref.svn.c.log.html Any thoughts on adding it? ------ IA Page: http://duck.co/ia/view/svn_cheat_sheet [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @Juholei
main
svn cheat sheet add description for svn log hi juholei nice and helpful goody i came along a few times and missed the svn log statement to display commit log messages with some helpful arguments l v any thoughts on adding it ia page juholei
1
4,887
25,074,975,381
IssuesEvent
2022-11-07 14:53:47
BioArchLinux/Packages
https://api.github.com/repos/BioArchLinux/Packages
closed
[MAINTAIN] packages influenced by openssl
maintain
<!-- Please report the error of one package in one issue! Use multi issues to report multi bugs. Thanks! --> **Packages List** <details> - [x] htslib-1.16-2: usr/lib/htslib/plugins/hfile_s3.so (libcrypto.so.1.1 - [x] phylosuite-1.2.2-5: usr/bin/phylosuite/libssl.so.1.1 (libcrypto.so.1.1) - [x] python2-2.7.18-4: usr/lib/python2.7/lib-dynload/_ssl.so (libssl.so.1.1) - [x] qt4-4.8.7-40: usr/lib/libQtNetwork.so.4.8.7 (libssl.so.1.1) - [x] r-hdf5array-1.26.0-1: usr/lib/R/library/HDF5Array/libs/HDF5Array.so (libcrypto.so.1.1) - [x] r-openssl-2.0.4-1: usr/lib/R/library/openssl/libs/openssl.so (libssl.so.1.1) - [x] r-rhdf5-2.42.0-1: usr/lib/R/library/rhdf5/libs/rhdf5.so (libcrypto.so.1.1) - [x] r-rserve-1.8.10-4: usr/lib/R/library/Rserve/libs/Rserve.so (libssl.so.1.1) - [x] r-s2-1.1.0-2: usr/lib/R/library/s2/libs/s2.so (libcrypto.so.1.1) - [x] seqlib-1.2.0-3: usr/bin/seqtools (libcrypto.so.1.1) - [x] shapeit4-4.2.2-3: usr/bin/shapeit4 (libssl.so.1.1) </details> **Packages (please complete the following information):** - Package Name: [e.g. iqtree] **Description** Add any other context about the problem here.
True
[MAINTAIN] packages influenced by openssl - <!-- Please report the error of one package in one issue! Use multi issues to report multi bugs. Thanks! --> **Packages List** <details> - [x] htslib-1.16-2: usr/lib/htslib/plugins/hfile_s3.so (libcrypto.so.1.1 - [x] phylosuite-1.2.2-5: usr/bin/phylosuite/libssl.so.1.1 (libcrypto.so.1.1) - [x] python2-2.7.18-4: usr/lib/python2.7/lib-dynload/_ssl.so (libssl.so.1.1) - [x] qt4-4.8.7-40: usr/lib/libQtNetwork.so.4.8.7 (libssl.so.1.1) - [x] r-hdf5array-1.26.0-1: usr/lib/R/library/HDF5Array/libs/HDF5Array.so (libcrypto.so.1.1) - [x] r-openssl-2.0.4-1: usr/lib/R/library/openssl/libs/openssl.so (libssl.so.1.1) - [x] r-rhdf5-2.42.0-1: usr/lib/R/library/rhdf5/libs/rhdf5.so (libcrypto.so.1.1) - [x] r-rserve-1.8.10-4: usr/lib/R/library/Rserve/libs/Rserve.so (libssl.so.1.1) - [x] r-s2-1.1.0-2: usr/lib/R/library/s2/libs/s2.so (libcrypto.so.1.1) - [x] seqlib-1.2.0-3: usr/bin/seqtools (libcrypto.so.1.1) - [x] shapeit4-4.2.2-3: usr/bin/shapeit4 (libssl.so.1.1) </details> **Packages (please complete the following information):** - Package Name: [e.g. iqtree] **Description** Add any other context about the problem here.
main
packages influenced by openssl please report the error of one package in one issue use multi issues to report multi bugs thanks packages list htslib usr lib htslib plugins hfile so libcrypto so phylosuite usr bin phylosuite libssl so libcrypto so usr lib lib dynload ssl so libssl so usr lib libqtnetwork so libssl so r usr lib r library libs so libcrypto so r openssl usr lib r library openssl libs openssl so libssl so r usr lib r library libs so libcrypto so r rserve usr lib r library rserve libs rserve so libssl so r usr lib r library libs so libcrypto so seqlib usr bin seqtools libcrypto so usr bin libssl so packages please complete the following information package name description add any other context about the problem here
1
1,890
6,577,533,141
IssuesEvent
2017-09-12 01:34:35
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Reopen s3 relative path issue #1907
affects_2.0 aws bug_report cloud waiting_on_maintainer
##### Issue Type: - Bug Report ##### Plugin Name: s3 ##### Ansible Version: 2.0.1.0-1 ##### Ansible Configuration: n/a ##### Environment: Alpine Linux: Edge ##### Summary: Relative paths to `files` directory within roles does not work. This is a re-open of #1907. ##### Steps To Reproduce: See #1907 as he details the problem really well. ##### Expected Results: See #1907 as he details the problem really well. ##### Actual Results: See #1907 as he details the problem really well.
True
Reopen s3 relative path issue #1907 - ##### Issue Type: - Bug Report ##### Plugin Name: s3 ##### Ansible Version: 2.0.1.0-1 ##### Ansible Configuration: n/a ##### Environment: Alpine Linux: Edge ##### Summary: Relative paths to `files` directory within roles does not work. This is a re-open of #1907. ##### Steps To Reproduce: See #1907 as he details the problem really well. ##### Expected Results: See #1907 as he details the problem really well. ##### Actual Results: See #1907 as he details the problem really well.
main
reopen relative path issue issue type bug report plugin name ansible version ansible configuration n a environment alpine linux edge summary relative paths to files directory within roles does not work this is a re open of steps to reproduce see as he details the problem really well expected results see as he details the problem really well actual results see as he details the problem really well
1
364
3,343,624,183
IssuesEvent
2015-11-15 17:14:02
caskroom/homebrew-cask
https://api.github.com/repos/caskroom/homebrew-cask
opened
Delete stale branches
awaiting maintainer feedback
Was looking at the branches, and it seems like there's a bit of cruft. Figured I'd make an issue just to confirm instead of deleting without feedback. Are the following branches safe to delete? ``` revert-10854-master : Updated 7 months ago by alebcay f-https-sourceforge-urls: Updated 8 months ago by phinze fix-alfred-preference-install: Updated 2 years ago by phinze audit-links: Updated 3 years ago by phinze gh-pages: Updated 3 years ago by phinze (caskroom/caskroom.github.io seems to take care of this) ```
True
Delete stale branches - Was looking at the branches, and it seems like there's a bit of cruft. Figured I'd make an issue just to confirm instead of deleting without feedback. Are the following branches safe to delete? ``` revert-10854-master : Updated 7 months ago by alebcay f-https-sourceforge-urls: Updated 8 months ago by phinze fix-alfred-preference-install: Updated 2 years ago by phinze audit-links: Updated 3 years ago by phinze gh-pages: Updated 3 years ago by phinze (caskroom/caskroom.github.io seems to take care of this) ```
main
delete stale branches was looking at the branches and it seems like there s a bit of cruft figured i d make an issue just to confirm instead of deleting without feedback are the following branches safe to delete revert master updated months ago by alebcay f https sourceforge urls updated months ago by phinze fix alfred preference install updated years ago by phinze audit links updated years ago by phinze gh pages updated years ago by phinze caskroom caskroom github io seems to take care of this
1
861
4,532,975,849
IssuesEvent
2016-09-08 09:54:22
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
Undocumented options in network/a10_server.py
docs_report in progress networking waiting_on_maintainer
##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME network/a10_server.py ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY In the a10_server module, ```write_config``` and ```validate_certs``` are valid options but the documentation does not mention them.
True
Undocumented options in network/a10_server.py - ##### ISSUE TYPE - Documentation Report ##### COMPONENT NAME network/a10_server.py ##### ANSIBLE VERSION ``` ansible 2.1.1.0 config file = configured module search path = Default w/o overrides ``` ##### OS / ENVIRONMENT N/A ##### SUMMARY In the a10_server module, ```write_config``` and ```validate_certs``` are valid options but the documentation does not mention them.
main
undocumented options in network server py issue type documentation report component name network server py ansible version ansible config file configured module search path default w o overrides os environment n a summary in the server module write config and validate certs are valid options but the documentation does not mention them
1
1,062
4,877,234,082
IssuesEvent
2016-11-16 15:14:17
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
service task: unsupported parameter for module: runlevel against Ubutun 16 LTS
affects_2.2 bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> service ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> MacOS X 10.11.6 (15G1108) ##### SUMMARY <!--- Explain the problem briefly --> We using a simple task service with runlevel against a Ubuntu 16.04.1 LTS (using systemd), I got the following error message: "unsupported parameter for module: runlevel" ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> The task is like this: ````service: name=my_service_name runlevel=99 enabled=yes state=started```` and the sysV init script file exists on `/etc/init.d/my_service_name` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> A failure about usage of runlevel option. I know systemd module now exists but it was working just fine on Ansible 2.1.3.0 and I didn't read anything about deprecating this feature. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> ```` fatal: [machine]: FAILED! => {"changed": false, "failed": true, "msg": "unsupported parameter for module: runlevel"} ````
True
service task: unsupported parameter for module: runlevel against Ubutun 16 LTS - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> service ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> MacOS X 10.11.6 (15G1108) ##### SUMMARY <!--- Explain the problem briefly --> We using a simple task service with runlevel against a Ubuntu 16.04.1 LTS (using systemd), I got the following error message: "unsupported parameter for module: runlevel" ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> The task is like this: ````service: name=my_service_name runlevel=99 enabled=yes state=started```` and the sysV init script file exists on `/etc/init.d/my_service_name` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> A failure about usage of runlevel option. I know systemd module now exists but it was working just fine on Ansible 2.1.3.0 and I didn't read anything about deprecating this feature. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> ```` fatal: [machine]: FAILED! => {"changed": false, "failed": true, "msg": "unsupported parameter for module: runlevel"} ````
main
service task unsupported parameter for module runlevel against ubutun lts issue type bug report component name service ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific macos x summary we using a simple task service with runlevel against a ubuntu lts using systemd i got the following error message unsupported parameter for module runlevel steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used the task is like this service name my service name runlevel enabled yes state started and the sysv init script file exists on etc init d my service name expected results a failure about usage of runlevel option i know systemd module now exists but it was working just fine on ansible and i didn t read anything about deprecating this feature actual results fatal failed changed false failed true msg unsupported parameter for module runlevel
1
41,765
5,396,445,258
IssuesEvent
2017-02-27 11:44:19
RestComm/jain-slee.sip
https://api.github.com/repos/RestComm/jain-slee.sip
closed
Caching terminateOnBye flag in DialogWrapper.
2. Enhancement Testing
This feature is needed when we create a new Dialog (ClientDialogWrapper) with the DialogActivity getNewDialog(Address from, Address to) method: ``` java DialogActivity caleeDialog = getSleeSipProvider().getNewDialog(remoteParty, localParty); caleeDialog.terminateOnBye(false); ``` Now we get an exception because wrappedDialog is null: > https://github.com/RestComm/jain-slee.sip/blob/master/resources/sip11/ra/src/main/java/org/mobicents/slee/resource/sip11/wrappers/ClientDialogWrapper.java#L545 The solution is caching terminateOnBye before we can call wrappedDialog.terminateOnBye(): > https://github.com/RestComm/jain-slee.sip/blob/master/resources/sip11/ra/src/main/java/org/mobicents/slee/resource/sip11/wrappers/DialogWrapper.java#L679
1.0
Caching terminateOnBye flag in DialogWrapper. - This feature is needed when we create a new Dialog (ClientDialogWrapper) with the DialogActivity getNewDialog(Address from, Address to) method: ``` java DialogActivity caleeDialog = getSleeSipProvider().getNewDialog(remoteParty, localParty); caleeDialog.terminateOnBye(false); ``` Now we get an exception because wrappedDialog is null: > https://github.com/RestComm/jain-slee.sip/blob/master/resources/sip11/ra/src/main/java/org/mobicents/slee/resource/sip11/wrappers/ClientDialogWrapper.java#L545 The solution is caching terminateOnBye before we can call wrappedDialog.terminateOnBye(): > https://github.com/RestComm/jain-slee.sip/blob/master/resources/sip11/ra/src/main/java/org/mobicents/slee/resource/sip11/wrappers/DialogWrapper.java#L679
non_main
caching terminateonbye flag in dialogwrapper this feature is needed when we create new dialog clientdialogwrapper with dialogactivity getnewdialog address from address to method java dialogactivity caleedialog getsleesipprovider getnewdialog remoteparty localparty caleedialog terminateonbye false now we have exception because wrappeddialog is null solution is caching terminateonbye before we can set wrappeddialog terminateonby
0
316,711
9,653,854,261
IssuesEvent
2019-05-19 08:59:57
cilium/cilium
https://api.github.com/repos/cilium/cilium
closed
Extraneous warning log messages while deleting endpoint ("Ignoring error while deleting endpoint")
priority/low stale
Hit in #7277 (~master), but doesn't appear related to the PR: https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Validated/10501/testReport/junit/k8s-1/13/K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled/ During endpoint deletion, we hit these errors: ``` 2019-03-06T02:41:13.498482778Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="Unable to delete key 10.10.1.21 from /sys/fs/bpf/tc/globals/cilium_lxc: Unable to delete element from map cilium_lxc: no such file or directory" subsys=daemon 2019-03-06T02:41:13.49848667Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="unable to remove endpoint from global policy map: Unable to delete element from map cilium_policy: no such file or directory" subsys=daemon ``` The "no such file or directory" likely just means that we are attempting to remove an element from the map and the element was already removed from the map. <details> <summary>cilium.log filtered by endpointID (below the fold, open to see)</summary> 2019-03-06T02:39:43.401641147Z level=info msg="New endpoint" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:39:43.401679297Z level=debug msg="Refreshing labels of endpoint" containerID=6673a04111 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" infoLabels="k8s:controller-revision-hash=56cf897587,k8s:pod-template-generation=1" subsys=endpoint 2019-03-06T02:39:43.40168597Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:pod-template-generation Value:1 Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.401689447Z level=debug 
msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:controller-revision-hash Value:56cf897587 Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.401717094Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.namespace Value:default Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.401723877Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.cilium.k8s.policy.serviceaccount Value:default Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.401727794Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.cilium.k8s.policy.cluster Value:default Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.40173152Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:zgroup Value:testDSClient Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.40176299Z level=debug msg="Endpoint has reserved identity, changing synchronously" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:39:43.40177066Z level=debug msg="Resolving identity for 
labels" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:39:45.33361152Z level=debug msg="Associated container event with endpoint" containerID=6673a04111 containerName=/k8s_POD_testclient-596bw_default_113b2a39-3fb9-11e9-8ea7-080027051dad_1 endpointID=1968 maxRetry=20 retry=2 subsys=workload-watcher willRetry=true 2019-03-06T02:39:45.343950813Z level=debug msg="Refreshing labels of endpoint" containerID=6673a04111 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" infoLabels="container:annotation.kubernetes.io/config.seen=2019-03-06T02:39:30.205869039Z,container:annotation.kubernetes.io/config.source=api,container:io.kubernetes.container.name=POD,container:io.kubernetes.docker.type=podsandbox,container:io.kubernetes.pod.name=testclient-596bw,container:io.kubernetes.pod.uid=113b2a39-3fb9-11e9-8ea7-080027051dad,k8s:controller-revision-hash=56cf897587,k8s:pod-template-generation=1" subsys=endpoint 2019-03-06T02:39:45.343987734Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.docker.type Value:podsandbox Source:container}" subsys=endpoint 2019-03-06T02:39:45.344018277Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:annotation.kubernetes.io/config.source Value:api Source:container}" subsys=endpoint 2019-03-06T02:39:45.344023001Z 
level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:annotation.kubernetes.io/config.seen Value:2019-03-06T02:39:30.205869039Z Source:container}" subsys=endpoint 2019-03-06T02:39:45.344025616Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.container.name Value:POD Source:container}" subsys=endpoint 2019-03-06T02:39:45.344049539Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.name Value:testclient-596bw Source:container}" subsys=endpoint 2019-03-06T02:39:45.344054119Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.uid Value:113b2a39-3fb9-11e9-8ea7-080027051dad Source:container}" subsys=endpoint 2019-03-06T02:39:53.41266576Z level=debug msg="Deleting CEP on first run" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:40:13.440507596Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:40:23.440877672Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" 
containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:40:33.441221111Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:40:43.443382057Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:40:53.443486945Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:41:03.447793751Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:41:13.433721275Z level=debug msg="Deleting endpoint" code=OK containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 endpointState=disconnecting ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw policyRevision=0 subsys=endpoint type=0 2019-03-06T02:41:13.43372899Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968 endpointID=1968 
ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.43373182Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_next_fail endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.43373424Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_next endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.43374721Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_stale endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.498430541Z level=debug msg="Endpoint removed" code=OK containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 endpointState=disconnected ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw policyRevision=0 subsys=endpoint type=0 2019-03-06T02:41:13.498433116Z level=info msg="Removed endpoint" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.498470701Z level=debug msg="Waiting for proxy updates to complete..." 
containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.498478653Z level=debug msg="Wait time for proxy updates: 15.136µs" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.498482778Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="Unable to delete key 10.10.1.21 from /sys/fs/bpf/tc/globals/cilium_lxc: Unable to delete element from map cilium_lxc: no such file or directory" subsys=daemon 2019-03-06T02:41:13.49848667Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="unable to remove endpoint from global policy map: Unable to delete element from map cilium_policy: no such file or directory" subsys=daemon </details> [pod-kube-system-cilium-f8gx9-cilium-agent.log](https://github.com/cilium/cilium/files/2934898/pod-kube-system-cilium-f8gx9-cilium-agent.log) [e2d99aa6_K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled.zip](https://github.com/cilium/cilium/files/2934899/e2d99aa6_K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled.zip)
1.0
Extraneous warning log messages while deleting endpoint ("Ignoring error while deleting endpoint") - Hit in #7277 (~master), but doesn't appear related to the PR: https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-Validated/10501/testReport/junit/k8s-1/13/K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled/ During endpoint deletion, we hit these errors: ``` 2019-03-06T02:41:13.498482778Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="Unable to delete key 10.10.1.21 from /sys/fs/bpf/tc/globals/cilium_lxc: Unable to delete element from map cilium_lxc: no such file or directory" subsys=daemon 2019-03-06T02:41:13.49848667Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="unable to remove endpoint from global policy map: Unable to delete element from map cilium_policy: no such file or directory" subsys=daemon ``` The "no such file or directory" likely just means that we are attempting to remove an element from the map and the element was already removed from the map. 
<details> <summary>cilium.log filtered by endpointID (below the fold, open to see)</summary> 2019-03-06T02:39:43.401641147Z level=info msg="New endpoint" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:39:43.401679297Z level=debug msg="Refreshing labels of endpoint" containerID=6673a04111 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" infoLabels="k8s:controller-revision-hash=56cf897587,k8s:pod-template-generation=1" subsys=endpoint 2019-03-06T02:39:43.40168597Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:pod-template-generation Value:1 Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.401689447Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:controller-revision-hash Value:56cf897587 Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.401717094Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.namespace Value:default Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.401723877Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.cilium.k8s.policy.serviceaccount Value:default Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.401727794Z level=debug msg="Assigning security 
relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.cilium.k8s.policy.cluster Value:default Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.40173152Z level=debug msg="Assigning security relevant label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:zgroup Value:testDSClient Source:k8s}" subsys=endpoint 2019-03-06T02:39:43.40176299Z level=debug msg="Endpoint has reserved identity, changing synchronously" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:39:43.40177066Z level=debug msg="Resolving identity for labels" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:39:45.33361152Z level=debug msg="Associated container event with endpoint" containerID=6673a04111 containerName=/k8s_POD_testclient-596bw_default_113b2a39-3fb9-11e9-8ea7-080027051dad_1 endpointID=1968 maxRetry=20 retry=2 subsys=workload-watcher willRetry=true 2019-03-06T02:39:45.343950813Z level=debug msg="Refreshing labels of endpoint" containerID=6673a04111 endpointID=1968 identityLabels="k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=default,k8s:io.kubernetes.pod.namespace=default,k8s:zgroup=testDSClient" 
infoLabels="container:annotation.kubernetes.io/config.seen=2019-03-06T02:39:30.205869039Z,container:annotation.kubernetes.io/config.source=api,container:io.kubernetes.container.name=POD,container:io.kubernetes.docker.type=podsandbox,container:io.kubernetes.pod.name=testclient-596bw,container:io.kubernetes.pod.uid=113b2a39-3fb9-11e9-8ea7-080027051dad,k8s:controller-revision-hash=56cf897587,k8s:pod-template-generation=1" subsys=endpoint 2019-03-06T02:39:45.343987734Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.docker.type Value:podsandbox Source:container}" subsys=endpoint 2019-03-06T02:39:45.344018277Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:annotation.kubernetes.io/config.source Value:api Source:container}" subsys=endpoint 2019-03-06T02:39:45.344023001Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:annotation.kubernetes.io/config.seen Value:2019-03-06T02:39:30.205869039Z Source:container}" subsys=endpoint 2019-03-06T02:39:45.344025616Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.container.name Value:POD Source:container}" subsys=endpoint 2019-03-06T02:39:45.344049539Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.name Value:testclient-596bw 
Source:container}" subsys=endpoint 2019-03-06T02:39:45.344054119Z level=debug msg="Assigning information label" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw obj="{Key:io.kubernetes.pod.uid Value:113b2a39-3fb9-11e9-8ea7-080027051dad Source:container}" subsys=endpoint 2019-03-06T02:39:53.41266576Z level=debug msg="Deleting CEP on first run" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:40:13.440507596Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:40:23.440877672Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:40:33.441221111Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:40:43.443382057Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 
2019-03-06T02:40:53.443486945Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:41:03.447793751Z level=debug msg="Skipping CiliumEndpoint update because it has not changed" containerID=6673a04111 controller="sync-to-k8s-ciliumendpoint (1968)" datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpointsynchronizer 2019-03-06T02:41:13.433721275Z level=debug msg="Deleting endpoint" code=OK containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 endpointState=disconnecting ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw policyRevision=0 subsys=endpoint type=0 2019-03-06T02:41:13.43372899Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.43373182Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_next_fail endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.43373424Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_next endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.43374721Z level=debug msg="removing directory" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 directory=1968_stale endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.498430541Z level=debug msg="Endpoint 
removed" code=OK containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 endpointState=disconnected ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw policyRevision=0 subsys=endpoint type=0 2019-03-06T02:41:13.498433116Z level=info msg="Removed endpoint" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.498470701Z level=debug msg="Waiting for proxy updates to complete..." containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.498478653Z level=debug msg="Wait time for proxy updates: 15.136µs" containerID=6673a04111 datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1968 ipv4=10.10.1.21 ipv6= k8sPodName=default/testclient-596bw subsys=endpoint 2019-03-06T02:41:13.498482778Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="Unable to delete key 10.10.1.21 from /sys/fs/bpf/tc/globals/cilium_lxc: Unable to delete element from map cilium_lxc: no such file or directory" subsys=daemon 2019-03-06T02:41:13.49848667Z level=warning msg="Ignoring error while deleting endpoint" endpointID=1968 error="unable to remove endpoint from global policy map: Unable to delete element from map cilium_policy: no such file or directory" subsys=daemon </details> [pod-kube-system-cilium-f8gx9-cilium-agent.log](https://github.com/cilium/cilium/files/2934898/pod-kube-system-cilium-f8gx9-cilium-agent.log) [e2d99aa6_K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled.zip](https://github.com/cilium/cilium/files/2934899/e2d99aa6_K8sDatapathConfig_IPv4Only_Check_connectivity_with_IPv6_disabled.zip)
non_main
extraneous warning log messages while deleting endpoint ignoring error while deleting endpoint hit in master but doesn t appear related to the pr during endpoint deletion we hit these errors level warning msg ignoring error while deleting endpoint endpointid error unable to delete key from sys fs bpf tc globals cilium lxc unable to delete element from map cilium lxc no such file or directory subsys daemon level warning msg ignoring error while deleting endpoint endpointid error unable to remove endpoint from global policy map unable to delete element from map cilium policy no such file or directory subsys daemon the no such file or directory likely just means that we are attempting to remove an element from the map and the element was already removed from the map cilium log filtered by endpointid below the fold open to see level info msg new endpoint containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpoint level debug msg refreshing labels of endpoint containerid endpointid identitylabels io cilium policy cluster default io cilium policy serviceaccount default io kubernetes pod namespace default zgroup testdsclient infolabels controller revision hash pod template generation subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key pod template generation value source subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key controller revision hash value source subsys endpoint level debug msg assigning security relevant label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io kubernetes pod namespace value default source subsys endpoint level debug msg assigning security relevant label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io 
cilium policy serviceaccount value default source subsys endpoint level debug msg assigning security relevant label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io cilium policy cluster value default source subsys endpoint level debug msg assigning security relevant label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key zgroup value testdsclient source subsys endpoint level debug msg endpoint has reserved identity changing synchronously containerid datapathpolicyrevision desiredpolicyrevision endpointid identitylabels io cilium policy cluster default io cilium policy serviceaccount default io kubernetes pod namespace default zgroup testdsclient default testclient subsys endpoint level debug msg resolving identity for labels containerid datapathpolicyrevision desiredpolicyrevision endpointid identitylabels io cilium policy cluster default io cilium policy serviceaccount default io kubernetes pod namespace default zgroup testdsclient default testclient subsys endpoint level debug msg associated container event with endpoint containerid containername pod testclient default endpointid maxretry retry subsys workload watcher willretry true level debug msg refreshing labels of endpoint containerid endpointid identitylabels io cilium policy cluster default io cilium policy serviceaccount default io kubernetes pod namespace default zgroup testdsclient infolabels container annotation kubernetes io config seen container annotation kubernetes io config source api container io kubernetes container name pod container io kubernetes docker type podsandbox container io kubernetes pod name testclient container io kubernetes pod uid controller revision hash pod template generation subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io kubernetes docker type value podsandbox source 
container subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key annotation kubernetes io config source value api source container subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key annotation kubernetes io config seen value source container subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io kubernetes container name value pod source container subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io kubernetes pod name value testclient source container subsys endpoint level debug msg assigning information label containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient obj key io kubernetes pod uid value source container subsys endpoint level debug msg deleting cep on first run containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update because it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update because it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update because it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update because 
it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update because it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg skipping ciliumendpoint update because it has not changed containerid controller sync to ciliumendpoint datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpointsynchronizer level debug msg deleting endpoint code ok containerid datapathpolicyrevision desiredpolicyrevision endpointid endpointstate disconnecting default testclient policyrevision subsys endpoint type level debug msg removing directory containerid datapathpolicyrevision desiredpolicyrevision directory endpointid default testclient subsys endpoint level debug msg removing directory containerid datapathpolicyrevision desiredpolicyrevision directory next fail endpointid default testclient subsys endpoint level debug msg removing directory containerid datapathpolicyrevision desiredpolicyrevision directory next endpointid default testclient subsys endpoint level debug msg removing directory containerid datapathpolicyrevision desiredpolicyrevision directory stale endpointid default testclient subsys endpoint level debug msg endpoint removed code ok containerid datapathpolicyrevision desiredpolicyrevision endpointid endpointstate disconnected default testclient policyrevision subsys endpoint type level info msg removed endpoint containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpoint level debug msg waiting for proxy updates to complete containerid datapathpolicyrevision desiredpolicyrevision endpointid default testclient subsys endpoint level debug msg wait time for proxy updates containerid datapathpolicyrevision 
desiredpolicyrevision endpointid default testclient subsys endpoint level warning msg ignoring error while deleting endpoint endpointid error unable to delete key from sys fs bpf tc globals cilium lxc unable to delete element from map cilium lxc no such file or directory subsys daemon level warning msg ignoring error while deleting endpoint endpointid error unable to remove endpoint from global policy map unable to delete element from map cilium policy no such file or directory subsys daemon
0
60,121
14,518,936,807
IssuesEvent
2020-12-14 01:23:22
olivialancaster/amplify-cli
https://api.github.com/repos/olivialancaster/amplify-cli
opened
CVE-2020-7788 (High) detected in ini-1.3.5.tgz, ini-1.3.4.tgz
security vulnerability
## CVE-2020-7788 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ini-1.3.5.tgz</b>, <b>ini-1.3.4.tgz</b></p></summary> <p> <details><summary><b>ini-1.3.5.tgz</b></p></summary> <p>An ini encoder/decoder for node</p> <p>Library home page: <a href="https://registry.npmjs.org/ini/-/ini-1.3.5.tgz">https://registry.npmjs.org/ini/-/ini-1.3.5.tgz</a></p> <p>Path to dependency file: amplify-cli/packages/amplify-cli/package.json</p> <p>Path to vulnerable library: amplify-cli/packages/amplify-codegen/node_modules/npm/node_modules/ini/package.json,amplify-cli/packages/amplify-codegen/node_modules/npm/node_modules/ini/package.json,amplify-cli/packages/amplify-codegen/node_modules/npm/node_modules/ini/package.json</p> <p> Dependency Hierarchy: - aws-appsync-codegen-0.17.5.tgz (Root Library) - npm-6.14.9.tgz - :x: **ini-1.3.5.tgz** (Vulnerable Library) </details> <details><summary><b>ini-1.3.4.tgz</b></p></summary> <p>An ini encoder/decoder for node</p> <p>Library home page: <a href="https://registry.npmjs.org/ini/-/ini-1.3.4.tgz">https://registry.npmjs.org/ini/-/ini-1.3.4.tgz</a></p> <p>Path to dependency file: amplify-cli/packages/amplify-cli/package.json</p> <p>Path to vulnerable library: amplify-cli/packages/amplify-cli/node_modules/npm/node_modules/ini/package.json,amplify-cli/packages/amplify-category-api/node_modules/npm/node_modules/ini/package.json,amplify-cli/packages/amplify-category-function/node_modules/npm/node_modules/ini/package.json,amplify-cli/packages/amplify-category-interactions/node_modules/npm/node_modules/ini/package.json</p> <p> Dependency Hierarchy: - grunt-aws-lambda-0.13.0.tgz (Root Library) - npm-2.15.12.tgz - :x: **ini-1.3.4.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a 
href="https://github.com/olivialancaster/amplify-cli/commit/cd0c44d979071e3e66901e1241487890136e13b8">cd0c44d979071e3e66901e1241487890136e13b8</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package ini before 1.3.6. If an attacker submits a malicious INI file to an application that parses it with ini.parse, they will pollute the prototype on the application. This can be exploited further depending on the context. <p>Publish Date: 2020-12-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7788>CVE-2020-7788</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788</a></p> <p>Release Date: 2020-12-11</p> <p>Fix Resolution: v1.3.6</p> </p> </details> <p></p>
True
CVE-2020-7788 (High) detected in ini-1.3.5.tgz, ini-1.3.4.tgz - ## CVE-2020-7788 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ini-1.3.5.tgz</b>, <b>ini-1.3.4.tgz</b></p></summary> <p> <details><summary><b>ini-1.3.5.tgz</b></p></summary> <p>An ini encoder/decoder for node</p> <p>Library home page: <a href="https://registry.npmjs.org/ini/-/ini-1.3.5.tgz">https://registry.npmjs.org/ini/-/ini-1.3.5.tgz</a></p> <p>Path to dependency file: amplify-cli/packages/amplify-cli/package.json</p> <p>Path to vulnerable library: amplify-cli/packages/amplify-codegen/node_modules/npm/node_modules/ini/package.json,amplify-cli/packages/amplify-codegen/node_modules/npm/node_modules/ini/package.json,amplify-cli/packages/amplify-codegen/node_modules/npm/node_modules/ini/package.json</p> <p> Dependency Hierarchy: - aws-appsync-codegen-0.17.5.tgz (Root Library) - npm-6.14.9.tgz - :x: **ini-1.3.5.tgz** (Vulnerable Library) </details> <details><summary><b>ini-1.3.4.tgz</b></p></summary> <p>An ini encoder/decoder for node</p> <p>Library home page: <a href="https://registry.npmjs.org/ini/-/ini-1.3.4.tgz">https://registry.npmjs.org/ini/-/ini-1.3.4.tgz</a></p> <p>Path to dependency file: amplify-cli/packages/amplify-cli/package.json</p> <p>Path to vulnerable library: amplify-cli/packages/amplify-cli/node_modules/npm/node_modules/ini/package.json,amplify-cli/packages/amplify-category-api/node_modules/npm/node_modules/ini/package.json,amplify-cli/packages/amplify-category-function/node_modules/npm/node_modules/ini/package.json,amplify-cli/packages/amplify-category-interactions/node_modules/npm/node_modules/ini/package.json</p> <p> Dependency Hierarchy: - grunt-aws-lambda-0.13.0.tgz (Root Library) - npm-2.15.12.tgz - :x: **ini-1.3.4.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a 
href="https://github.com/olivialancaster/amplify-cli/commit/cd0c44d979071e3e66901e1241487890136e13b8">cd0c44d979071e3e66901e1241487890136e13b8</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package ini before 1.3.6. If an attacker submits a malicious INI file to an application that parses it with ini.parse, they will pollute the prototype on the application. This can be exploited further depending on the context. <p>Publish Date: 2020-12-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7788>CVE-2020-7788</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788</a></p> <p>Release Date: 2020-12-11</p> <p>Fix Resolution: v1.3.6</p> </p> </details> <p></p>
non_main
cve high detected in ini tgz ini tgz cve high severity vulnerability vulnerable libraries ini tgz ini tgz ini tgz an ini encoder decoder for node library home page a href path to dependency file amplify cli packages amplify cli package json path to vulnerable library amplify cli packages amplify codegen node modules npm node modules ini package json amplify cli packages amplify codegen node modules npm node modules ini package json amplify cli packages amplify codegen node modules npm node modules ini package json dependency hierarchy aws appsync codegen tgz root library npm tgz x ini tgz vulnerable library ini tgz an ini encoder decoder for node library home page a href path to dependency file amplify cli packages amplify cli package json path to vulnerable library amplify cli packages amplify cli node modules npm node modules ini package json amplify cli packages amplify category api node modules npm node modules ini package json amplify cli packages amplify category function node modules npm node modules ini package json amplify cli packages amplify category interactions node modules npm node modules ini package json dependency hierarchy grunt aws lambda tgz root library npm tgz x ini tgz vulnerable library found in head commit a href vulnerability details this affects the package ini before if an attacker submits a malicious ini file to an application that parses it with ini parse they will pollute the prototype on the application this can be exploited further depending on the context publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
0
785,861
27,626,133,963
IssuesEvent
2023-03-10 06:58:47
ledd-23/crowdyy
https://api.github.com/repos/ledd-23/crowdyy
opened
[CRD-12] Intergrate ESLint into frontend workflow
enhancement frontend medium priority
**What:** Code analysis tool. **Why:** To adpot good style, so development can move fast in an orderly way. **AC:** ESLint configs working in VSCode.
1.0
[CRD-12] Intergrate ESLint into frontend workflow - **What:** Code analysis tool. **Why:** To adpot good style, so development can move fast in an orderly way. **AC:** ESLint configs working in VSCode.
non_main
intergrate eslint into frontend workflow what code analysis tool why to adpot good style so development can move fast in an orderly way ac eslint configs working in vscode
0
167,270
13,018,072,392
IssuesEvent
2020-07-26 15:34:43
qrdl/flightrec
https://api.github.com/repos/qrdl/flightrec
closed
Test script for expressions
testing
Add test script for viewng different kind of expressions: - struct members - array elements - result of logical or math operation
1.0
Test script for expressions - Add test script for viewng different kind of expressions: - struct members - array elements - result of logical or math operation
non_main
test script for expressions add test script for viewng different kind of expressions struct members array elements result of logical or math operation
0
3,742
15,713,141,818
IssuesEvent
2021-03-27 15:00:19
jbieliauskas/go-akeneo
https://api.github.com/repos/jbieliauskas/go-akeneo
closed
Create payload object for constructing map that's sent to Akeneo
maintainability
Some endpoints will need to take a struct and then generate a `map[string]interface{}` to add/omit certain fields if they're optional. Some operations will be duplicated, i.e. adding a sort order if it's positive or adding a slice if it's non-empty. Extract this to custom object that can be reused.
True
Create payload object for constructing map that's sent to Akeneo - Some endpoints will need to take a struct and then generate a `map[string]interface{}` to add/omit certain fields if they're optional. Some operations will be duplicated, i.e. adding a sort order if it's positive or adding a slice if it's non-empty. Extract this to custom object that can be reused.
main
create payload object for constructing map that s sent to akeneo some endpoints will need to take a struct and then generate a map interface to add omit certain fields if they re optional some operations will be duplicated i e adding a sort order if it s positive or adding a slice if it s non empty extract this to custom object that can be reused
1
817
4,441,895,460
IssuesEvent
2016-08-19 11:13:22
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
unarchive failed to unpack tar files
bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> unarchive ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> management node: centos 6.5 remote node: centos 6.5 ##### SUMMARY <!--- Explain the problem briefly --> unarchive module shows error when it handle gzip, bzip2 and xz compressed as well as uncompressed tar files using Ansible stable 2.1.0.0 version. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> unarchive: src: /usr/local/src/example.tar.gz dest: /usr/local/src creates: /usr/local/src/example/Makefile copy: no <!--- Paste example playbooks or commands between quotes below --> ``` ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> changed: [xxxxx] ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Unexpected error when accessing exploded file: [Errno 2] 没有那个文件或目录: '/usr/local/src/test.txt'"} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @test.retry <!--- Paste verbatim command output between quotes below --> ``` ```
True
unarchive failed to unpack tar files - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> unarchive ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> management node: centos 6.5 remote node: centos 6.5 ##### SUMMARY <!--- Explain the problem briefly --> unarchive module shows error when it handle gzip, bzip2 and xz compressed as well as uncompressed tar files using Ansible stable 2.1.0.0 version. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> unarchive: src: /usr/local/src/example.tar.gz dest: /usr/local/src creates: /usr/local/src/example/Makefile copy: no <!--- Paste example playbooks or commands between quotes below --> ``` ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> changed: [xxxxx] ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Unexpected error when accessing exploded file: [Errno 2] 没有那个文件或目录: '/usr/local/src/test.txt'"} NO MORE HOSTS LEFT ************************************************************* to retry, use: --limit @test.retry <!--- Paste verbatim command output between quotes below --> ``` ```
main
unarchive failed to unpack tar files issue type bug report component name unarchive ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific management node centos remote node centos summary unarchive module shows error when it handle gzip and xz compressed as well as uncompressed tar files using ansible stable version steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used unarchive src usr local src example tar gz dest usr local src creates usr local src example makefile copy no expected results changed actual results fatal failed changed false failed true msg unexpected error when accessing exploded file 没有那个文件或目录 usr local src test txt no more hosts left to retry use limit test retry
1
1,846
6,577,385,309
IssuesEvent
2017-09-12 00:32:34
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
user module should delete by UID not name
affects_1.9 feature_idea waiting_on_maintainer
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME user module ##### ANSIBLE VERSION ansible 1.9.4 ##### CONFIGURATION I am trying to clean up a username issue, that is a mess. Currently there are two names, but ansibile is not able to delete any of them because it is trying to delete by name not UID which is unique. ##### OS / ENVIRONMENT CentOS release 6.7 (Final) ##### SUMMARY I need to rename a username attached to a UID from old to new. While there is alo a name by the same new name present, if I am able to make chagnes based on the UID then ansible would not get confused. ##### STEPS TO REPRODUCE failed: [server] => {"failed": true, "name": "username", "rc": 1} msg: Multiple entries named 'username' in /etc/passwd. Please fix this with pwck or grpck. userdel: cannot remove entry 'username' from /etc/passwd ##### EXPECTED RESULTS I deleted username with UID ... ##### ACTUAL RESULTS it complains when there are two users by the same name
True
user module should delete by UID not name - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME user module ##### ANSIBLE VERSION ansible 1.9.4 ##### CONFIGURATION I am trying to clean up a username issue, that is a mess. Currently there are two names, but ansibile is not able to delete any of them because it is trying to delete by name not UID which is unique. ##### OS / ENVIRONMENT CentOS release 6.7 (Final) ##### SUMMARY I need to rename a username attached to a UID from old to new. While there is alo a name by the same new name present, if I am able to make chagnes based on the UID then ansible would not get confused. ##### STEPS TO REPRODUCE failed: [server] => {"failed": true, "name": "username", "rc": 1} msg: Multiple entries named 'username' in /etc/passwd. Please fix this with pwck or grpck. userdel: cannot remove entry 'username' from /etc/passwd ##### EXPECTED RESULTS I deleted username with UID ... ##### ACTUAL RESULTS it complains when there are two users by the same name
main
user module should delete by uid not name issue type feature idea component name user module ansible version ansible configuration i am trying to clean up a username issue that is a mess currently there are two names but ansibile is not able to delete any of them because it is trying to delete by name not uid which is unique os environment centos release final summary i need to rename a username attached to a uid from old to new while there is alo a name by the same new name present if i am able to make chagnes based on the uid then ansible would not get confused steps to reproduce failed failed true name username rc msg multiple entries named username in etc passwd please fix this with pwck or grpck userdel cannot remove entry username from etc passwd expected results i deleted username with uid actual results it complains when there are two users by the same name
1
95,087
10,865,686,026
IssuesEvent
2019-11-14 19:32:40
AIR-FOI-HR/AIR1925
https://api.github.com/repos/AIR-FOI-HR/AIR1925
opened
Razrada koraka projektnih iteracija
documentation
Detaljno razrađivanje funkcionalnosti iz Backloga u potencijalne taskove te vremensko planiranje izvođenja istih taskova kroz Scrum sprinteve.
1.0
Razrada koraka projektnih iteracija - Detaljno razrađivanje funkcionalnosti iz Backloga u potencijalne taskove te vremensko planiranje izvođenja istih taskova kroz Scrum sprinteve.
non_main
razrada koraka projektnih iteracija detaljno razrađivanje funkcionalnosti iz backloga u potencijalne taskove te vremensko planiranje izvođenja istih taskova kroz scrum sprinteve
0
5,720
30,235,245,966
IssuesEvent
2023-07-06 09:45:45
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
Release: Integrate update of identity version into the release process
kind/toil area/maintainability release/8.3.0-alpha2
**Description** Since the merge of https://github.com/camunda/zeebe/pull/12001 zeebe depends on identity for all 8.2.0 release and later. As an identity release may happen right before the zeebe release we need to add a step to the release process to make sure the identity version is the same as the zeebe version or if not update it. Right now patch level version of zeebe and identity are aligned, so if zeebe 8.2.5 is released the identity version should be 8.2.5 as well. This could be automated. ```[tasklist] ### Tasks - [ ] https://github.com/camunda/zeebe/issues/12920 - [ ] https://github.com/zeebe-io/zeebe-engineering-processes/issues/312 ```
True
Release: Integrate update of identity version into the release process - **Description** Since the merge of https://github.com/camunda/zeebe/pull/12001 zeebe depends on identity for all 8.2.0 release and later. As an identity release may happen right before the zeebe release we need to add a step to the release process to make sure the identity version is the same as the zeebe version or if not update it. Right now patch level version of zeebe and identity are aligned, so if zeebe 8.2.5 is released the identity version should be 8.2.5 as well. This could be automated. ```[tasklist] ### Tasks - [ ] https://github.com/camunda/zeebe/issues/12920 - [ ] https://github.com/zeebe-io/zeebe-engineering-processes/issues/312 ```
main
release integrate update of identity version into the release process description since the merge of zeebe depends on identity for all release and later as an identity release may happen right before the zeebe release we need to add a step to the release process to make sure the identity version is the same as the zeebe version or if not update it right now patch level version of zeebe and identity are aligned so if zeebe is released the identity version should be as well this could be automated tasks
1
1,910
6,577,571,872
IssuesEvent
2017-09-12 01:50:38
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
cloud/docker: update doc for field 'registry'
affects_2.1 cloud docker docs_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> Documentation Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> docker ##### ANSIBLE VERSION ``` ansible 2.1.0 (devel 22467a0de8) last updated 2016/04/13 11:42:21 (GMT +200) lib/ansible/modules/core: (detached HEAD 99cd31140d) last updated 2016/04/13 11:42:31 (GMT +200) lib/ansible/modules/extras: (detached HEAD ab2f4c4002) last updated 2016/04/13 11:42:40 (GMT +200) config file = configured module search path = Default w/o overrides ``` ##### SUMMARY docker module docs describes field `registry` as the "Remote registry URL to pull images from". However, I think this field's only use is for login, not pulling, so the doc is misleading. See the issue I opened on that subject (#3419). It would be nice if the doc could be fixed.
True
cloud/docker: update doc for field 'registry' - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> Documentation Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> docker ##### ANSIBLE VERSION ``` ansible 2.1.0 (devel 22467a0de8) last updated 2016/04/13 11:42:21 (GMT +200) lib/ansible/modules/core: (detached HEAD 99cd31140d) last updated 2016/04/13 11:42:31 (GMT +200) lib/ansible/modules/extras: (detached HEAD ab2f4c4002) last updated 2016/04/13 11:42:40 (GMT +200) config file = configured module search path = Default w/o overrides ``` ##### SUMMARY docker module docs describes field `registry` as the "Remote registry URL to pull images from". However, I think this field's only use is for login, not pulling, so the doc is misleading. See the issue I opened on that subject (#3419). It would be nice if the doc could be fixed.
main
cloud docker update doc for field registry issue type documentation report component name docker ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides summary docker module docs describes field registry as the remote registry url to pull images from however i think this field s only use is for login not pulling so the doc is misleading see the issue i opened on that subject it would be nice if the doc could be fixed
1
500,946
14,517,459,676
IssuesEvent
2020-12-13 19:41:44
ansible/awx
https://api.github.com/repos/ansible/awx
closed
1/1000 login attempts from the ui_next server returns HTTP 400 from /api/login
component:ui_next priority:low qe:hit state:needs_devel type:bug
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - API - UI ##### SUMMARY 1/1000 Login attempts fails on the ui_next dev server. The /api/login/ returns HTTP 400 The error from our testrunner is included under ADDITIONAL INFORMATION. ##### ENVIRONMENT * AWX version: 10+ * AWX install method: docker on linux, ui_next dev server ##### STEPS TO REPRODUCE Our UI Tests fail to log in 1/1000 attempts. We finally implemented logging to capture this failed login attempt. I honestly don't have a good way to reproduce. So I'll include the error as thrown from the test runner and also the sosreport. ##### ADDITIONAL INFORMATION ``` CypressError: cy.request() failed on: https://ui-next:3001/api/login/ The response we received from your web server was: > 400: Bad Request This was considered a failure because the status code was not '2xx' or '3xx'. If you do not want status codes to cause failures pass the option: 'failOnStatusCode: false' ----------------------------------------------------------- The request we sent was: Method: POST URL: https://ui-next:3001/api/login/ Headers: { "Connection": "keep-alive", "referer": "https://ui-next:3001/api/login/", "x-csrftoken": "H7vtWbsxhFeIEPBTKADe46l86MWP9yIBTJFk8zovaK2GBJOw4oFviRkWH1Gc6X66", "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 KHTML, like Gecko Chrome/79.0.3945.130 Safari/537.36", "accept": "*/*", "cookie": "sessionid=a2tdbz2di457la37vfirh34cptd44o6x; csrftoken=H7vtWbsxhFeIEPBTKADe46l86MWP9yIBTJFk8zovaK2GBJOw4oFviRkWH1Gc6X66", "content-type": "application/x-www-form-urlencoded", "accept-encoding": "gzip, deflate", "content-length": 155 } Body: username=admin&password=afXy%40tF%5Dl%2Cz%26%3EF%3F%3FH&next=%2Fapi%2F&csrfmiddlewaretoken=H7vtWbsxhFeIEPBTKADe46l86MWP9yIBTJFk8zovaK2GBJOw4oFviRkWH1Gc6X66 ----------------------------------------------------------- The response we got was: Status: 400 - Bad Request Headers: { "x-powered-by": "Express", "server": "nginx", "date": "Tue, 26 May 2020 15:08:34 GMT", "content-type": "application/json", "transfer-encoding": "chunked", "connection": "close", "vary": "Accept, Accept-Encoding", "x-api-total-time": "0.184s", "content-encoding": "gzip" } Body: { "detail": "The request could not be understood by the server." } Because this error occurred during a 'before each' hook we are skipping all of the remaining tests. at Object.cypressErr (https://ui-next:3001/__cypress/runner/cypress_runner.js:86207:11) at Object.throwErr (https://ui-next:3001/__cypress/runner/cypress_runner.js:86162:18) at Object.throwErrByPath (https://ui-next:3001/__cypress/runner/cypress_runner.js:86194:17) at https://ui-next:3001/__cypress/runner/cypress_runner.js:72496:18 at tryCatcher (https://ui-next:3001/__cypress/runner/cypress_runner.js:120203:23) at Promise._settlePromiseFromHandler (https://ui-next:3001/__cypress/runner/cypress_runner.js:118139:31) at Promise._settlePromise (https://ui-next:3001/__cypress/runner/cypress_runner.js:118196:18) at Promise._settlePromise0 (https://ui-next:3001/__cypress/runner/cypress_runner.js:118241:10) at Promise._settlePromises (https://ui-next:3001/__cypress/runner/cypress_runner.js:118320:18) at Async../node_modules/bluebird/js/release/async.js.Async._drainQueue (https://ui-next:3001/__cypress/runner/cypress_runner.js:114928:16) at Async../node_modules/bluebird/js/release/async.js.Async._drainQueues (https://ui-next:3001/__cypress/runner/cypress_runner.js:114938:10) at Async.drainQueues (https://ui-next:3001/__cypress/runner/cypress_runner.js:114812:14) ``` [all_tower_sos_reports (3).tar.gz](https://github.com/ansible/awx/files/4689021/all_tower_sos_reports.3.tar.gz)
1.0
1/1000 login attempts from the ui_next server returns HTTP 400 from /api/login - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME - API - UI ##### SUMMARY 1/1000 Login attempts fails on the ui_next dev server. The /api/login/ returns HTTP 400 The error from our testrunner is included under ADDITIONAL INFORMATION. ##### ENVIRONMENT * AWX version: 10+ * AWX install method: docker on linux, ui_next dev server ##### STEPS TO REPRODUCE Our UI Tests fail to log in 1/1000 attempts. We finally implemented logging to capture this failed login attempt. I honestly don't have a good way to reproduce. So I'll include the error as thrown from the test runner and also the sosreport. ##### ADDITIONAL INFORMATION ``` CypressError: cy.request() failed on: https://ui-next:3001/api/login/ The response we received from your web server was: > 400: Bad Request This was considered a failure because the status code was not '2xx' or '3xx'. If you do not want status codes to cause failures pass the option: 'failOnStatusCode: false' ----------------------------------------------------------- The request we sent was: Method: POST URL: https://ui-next:3001/api/login/ Headers: { "Connection": "keep-alive", "referer": "https://ui-next:3001/api/login/", "x-csrftoken": "H7vtWbsxhFeIEPBTKADe46l86MWP9yIBTJFk8zovaK2GBJOw4oFviRkWH1Gc6X66", "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 KHTML, like Gecko Chrome/79.0.3945.130 Safari/537.36", "accept": "*/*", "cookie": "sessionid=a2tdbz2di457la37vfirh34cptd44o6x; csrftoken=H7vtWbsxhFeIEPBTKADe46l86MWP9yIBTJFk8zovaK2GBJOw4oFviRkWH1Gc6X66", "content-type": "application/x-www-form-urlencoded", "accept-encoding": "gzip, deflate", "content-length": 155 } Body: username=admin&password=afXy%40tF%5Dl%2Cz%26%3EF%3F%3FH&next=%2Fapi%2F&csrfmiddlewaretoken=H7vtWbsxhFeIEPBTKADe46l86MWP9yIBTJFk8zovaK2GBJOw4oFviRkWH1Gc6X66 ----------------------------------------------------------- The response we got was: Status: 400 - Bad Request Headers: { "x-powered-by": "Express", "server": "nginx", "date": "Tue, 26 May 2020 15:08:34 GMT", "content-type": "application/json", "transfer-encoding": "chunked", "connection": "close", "vary": "Accept, Accept-Encoding", "x-api-total-time": "0.184s", "content-encoding": "gzip" } Body: { "detail": "The request could not be understood by the server." } Because this error occurred during a 'before each' hook we are skipping all of the remaining tests. at Object.cypressErr (https://ui-next:3001/__cypress/runner/cypress_runner.js:86207:11) at Object.throwErr (https://ui-next:3001/__cypress/runner/cypress_runner.js:86162:18) at Object.throwErrByPath (https://ui-next:3001/__cypress/runner/cypress_runner.js:86194:17) at https://ui-next:3001/__cypress/runner/cypress_runner.js:72496:18 at tryCatcher (https://ui-next:3001/__cypress/runner/cypress_runner.js:120203:23) at Promise._settlePromiseFromHandler (https://ui-next:3001/__cypress/runner/cypress_runner.js:118139:31) at Promise._settlePromise (https://ui-next:3001/__cypress/runner/cypress_runner.js:118196:18) at Promise._settlePromise0 (https://ui-next:3001/__cypress/runner/cypress_runner.js:118241:10) at Promise._settlePromises (https://ui-next:3001/__cypress/runner/cypress_runner.js:118320:18) at Async../node_modules/bluebird/js/release/async.js.Async._drainQueue (https://ui-next:3001/__cypress/runner/cypress_runner.js:114928:16) at Async../node_modules/bluebird/js/release/async.js.Async._drainQueues (https://ui-next:3001/__cypress/runner/cypress_runner.js:114938:10) at Async.drainQueues (https://ui-next:3001/__cypress/runner/cypress_runner.js:114812:14) ``` [all_tower_sos_reports (3).tar.gz](https://github.com/ansible/awx/files/4689021/all_tower_sos_reports.3.tar.gz)
non_main
login attempts from the ui next server returns http from api login issue type bug report component name api ui summary login attempts fails on the ui next dev server the api login returns http the error from our testrunner is included under additional information environment awx version awx install method docker on linux ui next dev server steps to reproduce our ui tests fail to log in attempts we finally implemented logging to capture this failed login attempt i honestly don t have a good way to reproduce so i ll include the error as thrown from the test runner and also the sosreport additional information cypresserror cy request failed on the response we received from your web server was bad request this was considered a failure because the status code was not or if you do not want status codes to cause failures pass the option failonstatuscode false the request we sent was method post url headers connection keep alive referer x csrftoken user agent mozilla linux applewebkit khtml like gecko chrome safari accept cookie sessionid csrftoken content type application x www form urlencoded accept encoding gzip deflate content length body username admin password afxy next csrfmiddlewaretoken the response we got was status bad request headers x powered by express server nginx date tue may gmt content type application json transfer encoding chunked connection close vary accept accept encoding x api total time content encoding gzip body detail the request could not be understood by the server because this error occurred during a before each hook we are skipping all of the remaining tests at object cypresserr at object throwerr at object throwerrbypath at at trycatcher at promise settlepromisefromhandler at promise settlepromise at promise at promise settlepromises at async node modules bluebird js release async js async drainqueue at async node modules bluebird js release async js async drainqueues at async drainqueues
0
522,198
15,158,151,628
IssuesEvent
2021-02-12 00:27:09
NOAA-GSL/MATS
https://api.github.com/repos/NOAA-GSL/MATS
closed
Show/hide curve/points/etc not carrying through to popouts
Priority: Medium Project: MATS Status: Closed Type: Bug
--- Author Name: **molly.b.smith** (@mollybsmith-noaa) Original Redmine Issue: 61658, https://vlab.ncep.noaa.gov/redmine/issues/61658 Original Date: 2019-03-22 Original Assignee: molly.b.smith --- Bonny found that the pupout windows aren't keeping the user's show/hide settings.
1.0
Show/hide curve/points/etc not carrying through to popouts - --- Author Name: **molly.b.smith** (@mollybsmith-noaa) Original Redmine Issue: 61658, https://vlab.ncep.noaa.gov/redmine/issues/61658 Original Date: 2019-03-22 Original Assignee: molly.b.smith --- Bonny found that the pupout windows aren't keeping the user's show/hide settings.
non_main
show hide curve points etc not carrying through to popouts author name molly b smith mollybsmith noaa original redmine issue original date original assignee molly b smith bonny found that the pupout windows aren t keeping the user s show hide settings
0
1,953
6,666,663,538
IssuesEvent
2017-10-03 09:15:00
reactiveui/ReactiveUI
https://api.github.com/repos/reactiveui/ReactiveUI
closed
MSBuild binlog artifacts aren't being pushed if the build step fails
contributor-experience housekeeping maintainer-experience up-for-grabs
**Do you want to request a *feature* or report a *bug*?** feature **What is the current behavior?** msbuild `binlogs` (msbuildlog.com) are not uploaded to AppVeyor if build steps fail. **What is the expected behavior?** * msbuild `binlogs` are uploaded to AppVeyor always. * https://www.appveyor.com/docs/packaging-artifacts/
True
MSBuild binlog artifacts aren't being pushed if the build step fails - **Do you want to request a *feature* or report a *bug*?** feature **What is the current behavior?** msbuild `binlogs` (msbuildlog.com) are not uploaded to AppVeyor if build steps fail. **What is the expected behavior?** * msbuild `binlogs` are uploaded to AppVeyor always. * https://www.appveyor.com/docs/packaging-artifacts/
main
msbuild binlog artifacts aren t being pushed if the build step fails do you want to request a feature or report a bug feature what is the current behavior msbuild binlogs msbuildlog com are not uploaded to appveyor if build steps fail what is the expected behavior msbuild binlogs are uploaded to appveyor always
1
684,358
23,415,369,085
IssuesEvent
2022-08-12 23:41:25
panel-attack/panel-attack
https://api.github.com/repos/panel-attack/panel-attack
closed
1P modes crashing if random character/stage is saved in config
bug Client-side high priority
Repro: 1. Go into 1p vs yourself, set both character and stage to random 2. Enter 1p time attack I strongly suspect this is due to the values getting saved back into the config differently before after refactoring select_screen. Error main.lua:102: graphics.lua:291: attempt to index a nil value { ["release_version"] = beta-2022-08-10_03-41-28, ["name"] = Endaris, ["love_version"] = 11.4.0, ["error"] = graphics.lua:291: attempt to index a nil value, ["stack"] = stack traceback: graphics.lua: in function 'render' match.lua:472: in function 'render' mainloop.lua:456: in function 'func' mainloop.lua:82: in function <mainloop.lua:29>, ["operating_system"] = OS: Windows, ["engine_version"] = 046, } Traceback [love "callbacks.lua"]:228: in function 'handler' [C]: in function 'error' main.lua:102: in function 'update' [love "callbacks.lua"]:162: in function <[love "callbacks.lua"]:144> [C]: in function 'xpcall'
1.0
1P modes crashing if random character/stage is saved in config - Repro: 1. Go into 1p vs yourself, set both character and stage to random 2. Enter 1p time attack I strongly suspect this is due to the values getting saved back into the config differently before after refactoring select_screen. Error main.lua:102: graphics.lua:291: attempt to index a nil value { ["release_version"] = beta-2022-08-10_03-41-28, ["name"] = Endaris, ["love_version"] = 11.4.0, ["error"] = graphics.lua:291: attempt to index a nil value, ["stack"] = stack traceback: graphics.lua: in function 'render' match.lua:472: in function 'render' mainloop.lua:456: in function 'func' mainloop.lua:82: in function <mainloop.lua:29>, ["operating_system"] = OS: Windows, ["engine_version"] = 046, } Traceback [love "callbacks.lua"]:228: in function 'handler' [C]: in function 'error' main.lua:102: in function 'update' [love "callbacks.lua"]:162: in function <[love "callbacks.lua"]:144> [C]: in function 'xpcall'
non_main
modes crashing if random character stage is saved in config repro go into vs yourself set both character and stage to random enter time attack i strongly suspect this is due to the values getting saved back into the config differently before after refactoring select screen error main lua graphics lua attempt to index a nil value beta endaris graphics lua attempt to index a nil value stack traceback graphics lua in function render match lua in function render mainloop lua in function func mainloop lua in function os windows traceback in function handler in function error main lua in function update in function in function xpcall
0
1,640
6,572,662,037
IssuesEvent
2017-09-11 04:11:16
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
zabbix_host fails if called to create/update a host when "force: false"
affects_2.2 bug_report waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME zabbix_host ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT N/A ##### SUMMARY The module "zabbix_host" fails if called to create/update a host on Zabbix when "force: false". ##### STEPS TO REPRODUCE 1. Make sure the host you are trying to create/update already exists on Zabbix server; 2. Use the "zabbix_host" module with "force: false". ``` --- - name: register host on Zabbix server local_action: module: zabbix_host server_url: http://zabbix.example.com/zabbix login_user: ansible login_password: superSecret host_name: "{{ ansible_fqdn }}" host_groups: - Discovered hosts link_templates: - Template OS Linux status: disabled state: present force: false interfaces: - type: 1 main: 1 useip: 1 ip: "{{ ansible_default_ipv4.address }}" dns: "{{ ansible_fqdn }}" port: 10050 ... ``` ##### EXPECTED RESULTS When trying to register a host that is already registered and with "force: false", I expected that the zabbix_host module would just return gracefuly with something like "failed: false" and "changed: false". ##### ACTUAL RESULTS If the host already exists on Zabbix server, the playbook exits with a "MODULE FAILURE". If the host does not already exist, the playbook is executed correclty and the host is created as expected. Here is the output of ansible-playbook with "-vvv": ``` An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/tmp/ansible_1baBu5/ansible_module_zabbix_host.py", line 562, in <module> main() File "/tmp/ansible_1baBu5/ansible_module_zabbix_host.py", line 506, in main module.fail_json(changed=False, result="Host present, Can't update configuration without force") File "/tmp/ansible_1baBu5/ansible_modlib.zip/ansible/module_utils/basic.py", line 1807, in fail_json AssertionError: implementation error -- msg to explain the error is required fatal: [teste-rhel6 -> localhost]: FAILED! => { "changed": false, "failed": true, "invocation": { "module_name": "zabbix_host" }, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_1baBu5/ansible_module_zabbix_host.py\", line 562, in <module>\n main()\n File \"/tmp/ansible_1baBu5/ansible_module_zabbix_host.py\", line 506, in main\n module.fail_json(changed=False, result=\"Host present, Can't update configuration without force\")\n File \"/tmp/ansible_1baBu5/ansible_modlib.zip/ansible/module_utils/basic.py\", line 1807, in fail_json\nAssertionError: implementation error -- msg to explain the error is required\n", "module_stdout": "", "msg": "MODULE FAILURE" } ```
True
zabbix_host fails if called to create/update a host when "force: false" - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME zabbix_host ##### ANSIBLE VERSION ``` ansible 2.2.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION None ##### OS / ENVIRONMENT N/A ##### SUMMARY The module "zabbix_host" fails if called to create/update a host on Zabbix when "force: false". ##### STEPS TO REPRODUCE 1. Make sure the host you are trying to create/update already exists on Zabbix server; 2. Use the "zabbix_host" module with "force: false". ``` --- - name: register host on Zabbix server local_action: module: zabbix_host server_url: http://zabbix.example.com/zabbix login_user: ansible login_password: superSecret host_name: "{{ ansible_fqdn }}" host_groups: - Discovered hosts link_templates: - Template OS Linux status: disabled state: present force: false interfaces: - type: 1 main: 1 useip: 1 ip: "{{ ansible_default_ipv4.address }}" dns: "{{ ansible_fqdn }}" port: 10050 ... ``` ##### EXPECTED RESULTS When trying to register a host that is already registered and with "force: false", I expected that the zabbix_host module would just return gracefuly with something like "failed: false" and "changed: false". ##### ACTUAL RESULTS If the host already exists on Zabbix server, the playbook exits with a "MODULE FAILURE". If the host does not already exist, the playbook is executed correclty and the host is created as expected. Here is the output of ansible-playbook with "-vvv": ``` An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File "/tmp/ansible_1baBu5/ansible_module_zabbix_host.py", line 562, in <module> main() File "/tmp/ansible_1baBu5/ansible_module_zabbix_host.py", line 506, in main module.fail_json(changed=False, result="Host present, Can't update configuration without force") File "/tmp/ansible_1baBu5/ansible_modlib.zip/ansible/module_utils/basic.py", line 1807, in fail_json AssertionError: implementation error -- msg to explain the error is required fatal: [teste-rhel6 -> localhost]: FAILED! => { "changed": false, "failed": true, "invocation": { "module_name": "zabbix_host" }, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_1baBu5/ansible_module_zabbix_host.py\", line 562, in <module>\n main()\n File \"/tmp/ansible_1baBu5/ansible_module_zabbix_host.py\", line 506, in main\n module.fail_json(changed=False, result=\"Host present, Can't update configuration without force\")\n File \"/tmp/ansible_1baBu5/ansible_modlib.zip/ansible/module_utils/basic.py\", line 1807, in fail_json\nAssertionError: implementation error -- msg to explain the error is required\n", "module_stdout": "", "msg": "MODULE FAILURE" } ```
main
zabbix host fails if called to create update a host when force false issue type bug report component name zabbix host ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration none os environment n a summary the module zabbix host fails if called to create update a host on zabbix when force false steps to reproduce make sure the host you are trying to create update already exists on zabbix server use the zabbix host module with force false name register host on zabbix server local action module zabbix host server url login user ansible login password supersecret host name ansible fqdn host groups discovered hosts link templates template os linux status disabled state present force false interfaces type main useip ip ansible default address dns ansible fqdn port expected results when trying to register a host that is already registered and with force false i expected that the zabbix host module would just return gracefuly with something like failed false and changed false actual results if the host already exists on zabbix server the playbook exits with a module failure if the host does not already exist the playbook is executed correclty and the host is created as expected here is the output of ansible playbook with vvv an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible ansible module zabbix host py line in main file tmp ansible ansible module zabbix host py line in main module fail json changed false result host present can t update configuration without force file tmp ansible ansible modlib zip ansible module utils basic py line in fail json assertionerror implementation error msg to explain the error is required fatal failed changed false failed true invocation module name zabbix host module stderr traceback most recent call last n file tmp ansible ansible module zabbix host py line in n main n file tmp ansible ansible module zabbix 
host py line in main n module fail json changed false result host present can t update configuration without force n file tmp ansible ansible modlib zip ansible module utils basic py line in fail json nassertionerror implementation error msg to explain the error is required n module stdout msg module failure
1
257,300
19,512,028,653
IssuesEvent
2021-12-29 00:57:58
horizongir/opencv.net
https://api.github.com/repos/horizongir/opencv.net
closed
Generate reference documentation for the entire API
documentation
Although OpenCV.NET aims to be as close to the original OpenCV as possible, there are still important differences and trade-offs which benefit from explanation. Furthermore, it is not clear to new users exactly which functions are currently supported by OpenCV.NET and how to use them. Generation of documentation websites for .NET projects has recently started to converge on [DocFX](https://dotnet.github.io/docfx/) as a standard, which allows static documentation to be generated next to the project code, with support for custom markdown pages and cross-references. It would be great to include a brand new documentation website in the next version.
1.0
Generate reference documentation for the entire API - Although OpenCV.NET aims to be as close to the original OpenCV as possible, there are still important differences and trade-offs which benefit from explanation. Furthermore, it is not clear to new users exactly which functions are currently supported by OpenCV.NET and how to use them. Generation of documentation websites for .NET projects has recently started to converge on [DocFX](https://dotnet.github.io/docfx/) as a standard, which allows static documentation to be generated next to the project code, with support for custom markdown pages and cross-references. It would be great to include a brand new documentation website in the next version.
non_main
generate reference documentation for the entire api although opencv net aims to be as close to the original opencv as possible there are still important differences and trade offs which benefit from explanation furthermore it is not clear to new users exactly which functions are currently supported by opencv net and how to use them generation of documentation websites for net projects has recently started to converge on as a standard which allows static documentation to be generated next to the project code with support for custom markdown pages and cross references it would be great to include a brand new documentation website in the next version
0
466,671
13,431,057,292
IssuesEvent
2020-09-07 06:16:34
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
docs.google.com - see bug description
browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical
<!-- @browser: Firefox 65.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/57820 --> **URL**: https://docs.google.com/forms/d/e/1FAIpQLScuoqqSG8ho4fYK8HGhXeOsEDhGAs0cKV_P5ygTy0KtK_7Jhg/viewform?vc=0&c=0&w=1&flr=0&usp=mail_form_link **Browser / Version**: Firefox 65.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: cannot tipe in it **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190211233335</li><li>channel: release</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/9/fc8ee523-2158-4fe8-9a75-302a8c7e4a6a) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
docs.google.com - see bug description - <!-- @browser: Firefox 65.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0 --> <!-- @reported_with: desktop-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/57820 --> **URL**: https://docs.google.com/forms/d/e/1FAIpQLScuoqqSG8ho4fYK8HGhXeOsEDhGAs0cKV_P5ygTy0KtK_7Jhg/viewform?vc=0&c=0&w=1&flr=0&usp=mail_form_link **Browser / Version**: Firefox 65.0 **Operating System**: Windows 10 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: cannot tipe in it **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20190211233335</li><li>channel: release</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/9/fc8ee523-2158-4fe8-9a75-302a8c7e4a6a) _From [webcompat.com](https://webcompat.com/) with ❤️_
non_main
docs google com see bug description url browser version firefox operating system windows tested another browser yes chrome problem type something else description cannot tipe in it steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel release hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
0
197,620
6,962,095,444
IssuesEvent
2017-12-08 12:15:44
prometheus/prometheus
https://api.github.com/repos/prometheus/prometheus
closed
Prometheus UI doesn't show metrics read from remote storage in the drop down
component/remote storage kind/enhancement priority/Pmaybe
<!-- Please do *NOT* ask usage questions in Github issues. If your issue is not a feature request or bug report use: https://groups.google.com/forum/#!forum/prometheus-users. If you are unsure whether you hit a bug, search and ask in the mailing list first. You can find more information at: https://prometheus.io/community/ --> **What did you do?** Set up pg_prometheus, pg_prometheus_adapter with prometheus. Configured for remote_read and remote_write config.yml Followed instructions from : https://github.com/timescale/prometheus-postgresql-adapter **What did you expect to see?** Expected the remotely read metrics to show up in Prometheus UI. Only scraped metrics show up in the UI. Even in the Grafana integration with prometheus, only scraped metrics show up in auto complete. **What did you see instead? Under which circumstances?** Metrics read from remote storage don't show up. However, executed query to fetch metric using query editor. Results show up. The drop down doesn't show this metric at all. **Environment** Mac OS X * System information: Darwin 16.7.0 x86_64 * Prometheus version: prometheus, version 2.0.0 (branch: master, revision: 607a6756172618ef199bf63f5413e68587f458da) * Prometheus configuration file: ``` # my global config global: scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute. evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute. # scrape_timeout is set to the global default (10s). # Attach these labels to any time series or alerts when communicating with # external systems (federation, remote storage, Alertmanager). external_labels: source: 'codelab-monitor' remote_write: - url: "http://172.17.0.3:9201/write" remote_timeout: 60s remote_read: - url: "http://172.17.0.3:9201/read" remote_timeout: 60s read_recent: false # A scrape configuration containing exactly one endpoint to scrape: # Here it's Prometheus itself. 
scrape_configs: # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. - job_name: 'my_job' honor_labels: true scrape_interval: 60s scrape_timeout: 30s static_configs: - targets: - 'docker.for.mac.localhost:9091' ```
1.0
Prometheus UI doesn't show metrics read from remote storage in the drop down - <!-- Please do *NOT* ask usage questions in Github issues. If your issue is not a feature request or bug report use: https://groups.google.com/forum/#!forum/prometheus-users. If you are unsure whether you hit a bug, search and ask in the mailing list first. You can find more information at: https://prometheus.io/community/ --> **What did you do?** Set up pg_prometheus, pg_prometheus_adapter with prometheus. Configured for remote_read and remote_write config.yml Followed instructions from : https://github.com/timescale/prometheus-postgresql-adapter **What did you expect to see?** Expected the remotely read metrics to show up in Prometheus UI. Only scraped metrics show up in the UI. Even in the Grafana integration with prometheus, only scraped metrics show up in auto complete. **What did you see instead? Under which circumstances?** Metrics read from remote storage don't show up. However, executed query to fetch metric using query editor. Results show up. The drop down doesn't show this metric at all. **Environment** Mac OS X * System information: Darwin 16.7.0 x86_64 * Prometheus version: prometheus, version 2.0.0 (branch: master, revision: 607a6756172618ef199bf63f5413e68587f458da) * Prometheus configuration file: ``` # my global config global: scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute. evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute. # scrape_timeout is set to the global default (10s). # Attach these labels to any time series or alerts when communicating with # external systems (federation, remote storage, Alertmanager). 
external_labels: source: 'codelab-monitor' remote_write: - url: "http://172.17.0.3:9201/write" remote_timeout: 60s remote_read: - url: "http://172.17.0.3:9201/read" remote_timeout: 60s read_recent: false # A scrape configuration containing exactly one endpoint to scrape: # Here it's Prometheus itself. scrape_configs: # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. - job_name: 'my_job' honor_labels: true scrape_interval: 60s scrape_timeout: 30s static_configs: - targets: - 'docker.for.mac.localhost:9091' ```
non_main
prometheus ui doesn t show metrics read from remote storage in the drop down please do not ask usage questions in github issues if your issue is not a feature request or bug report use if you are unsure whether you hit a bug search and ask in the mailing list first you can find more information at what did you do set up pg prometheus pg prometheus adapter with prometheus configured for remote read and remote write config yml followed instructions from what did you expect to see expected the remotely read metrics to show up in prometheus ui only scraped metrics show up in the ui even in the grafana integration with prometheus only scraped metrics show up in auto complete what did you see instead under which circumstances metrics read from remote storage don t show up however executed query to fetch metric using query editor results show up the drop down doesn t show this metric at all environment mac os x system information darwin prometheus version prometheus version branch master revision prometheus configuration file my global config global scrape interval set the scrape interval to every seconds default is every minute evaluation interval evaluate rules every seconds the default is every minute scrape timeout is set to the global default attach these labels to any time series or alerts when communicating with external systems federation remote storage alertmanager external labels source codelab monitor remote write url remote timeout remote read url remote timeout read recent false a scrape configuration containing exactly one endpoint to scrape here it s prometheus itself scrape configs the job name is added as a label job to any timeseries scraped from this config job name my job honor labels true scrape interval scrape timeout static configs targets docker for mac localhost
0
37,232
12,473,809,401
IssuesEvent
2020-05-29 08:31:22
Kalskiman/gentelella
https://api.github.com/repos/Kalskiman/gentelella
opened
CVE-2017-16113 (High) detected in parsejson-0.0.3.tgz
security vulnerability
## CVE-2017-16113 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parsejson-0.0.3.tgz</b></p></summary> <p>Method that parses a JSON string and returns a JSON object</p> <p>Library home page: <a href="https://registry.npmjs.org/parsejson/-/parsejson-0.0.3.tgz">https://registry.npmjs.org/parsejson/-/parsejson-0.0.3.tgz</a></p> <p>Path to dependency file: /tmp/ws-ua_20200529074747_DVXLIQ/archiveExtraction_FGLLEN/20200529074747/ws-scm_depth_0/gentelella/vendors/flot/flot-2.1.3/package/package.json</p> <p>Path to vulnerable library: /tmp/ws-ua_20200529074747_DVXLIQ/archiveExtraction_FGLLEN/20200529074747/ws-scm_depth_0/gentelella/vendors/flot/flot-2.1.3/package/node_modules/parsejson/package.json</p> <p> Dependency Hierarchy: - karma-1.7.1.tgz (Root Library) - socket.io-1.7.3.tgz - socket.io-client-1.7.3.tgz - engine.io-client-1.8.3.tgz - :x: **parsejson-0.0.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Kalskiman/gentelella/commit/0736072b46adcf2ceef588bb8660b4851929bc43">0736072b46adcf2ceef588bb8660b4851929bc43</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The parsejson module is vulnerable to regular expression denial of service when untrusted user input is passed into it to be parsed. 
<p>Publish Date: 2018-06-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16113>CVE-2017-16113</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-16113 (High) detected in parsejson-0.0.3.tgz - ## CVE-2017-16113 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parsejson-0.0.3.tgz</b></p></summary> <p>Method that parses a JSON string and returns a JSON object</p> <p>Library home page: <a href="https://registry.npmjs.org/parsejson/-/parsejson-0.0.3.tgz">https://registry.npmjs.org/parsejson/-/parsejson-0.0.3.tgz</a></p> <p>Path to dependency file: /tmp/ws-ua_20200529074747_DVXLIQ/archiveExtraction_FGLLEN/20200529074747/ws-scm_depth_0/gentelella/vendors/flot/flot-2.1.3/package/package.json</p> <p>Path to vulnerable library: /tmp/ws-ua_20200529074747_DVXLIQ/archiveExtraction_FGLLEN/20200529074747/ws-scm_depth_0/gentelella/vendors/flot/flot-2.1.3/package/node_modules/parsejson/package.json</p> <p> Dependency Hierarchy: - karma-1.7.1.tgz (Root Library) - socket.io-1.7.3.tgz - socket.io-client-1.7.3.tgz - engine.io-client-1.8.3.tgz - :x: **parsejson-0.0.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Kalskiman/gentelella/commit/0736072b46adcf2ceef588bb8660b4851929bc43">0736072b46adcf2ceef588bb8660b4851929bc43</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The parsejson module is vulnerable to regular expression denial of service when untrusted user input is passed into it to be parsed. 
<p>Publish Date: 2018-06-07 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16113>CVE-2017-16113</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in parsejson tgz cve high severity vulnerability vulnerable library parsejson tgz method that parses a json string and returns a json object library home page a href path to dependency file tmp ws ua dvxliq archiveextraction fgllen ws scm depth gentelella vendors flot flot package package json path to vulnerable library tmp ws ua dvxliq archiveextraction fgllen ws scm depth gentelella vendors flot flot package node modules parsejson package json dependency hierarchy karma tgz root library socket io tgz socket io client tgz engine io client tgz x parsejson tgz vulnerable library found in head commit a href vulnerability details the parsejson module is vulnerable to regular expression denial of service when untrusted user input is passed into it to be parsed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource
0
275,545
8,576,814,687
IssuesEvent
2018-11-12 21:36:54
idaholab/raven
https://api.github.com/repos/idaholab/raven
closed
Selective Indices from CustomSampler
priority_critical task
-------- Issue Description -------- Feature request: limit samples provided from a CustomSampler using a select set of indices. ---------------- For Change Control Board: Issue Review ---------------- This review should occur before any development is performed as a response to this issue. - [x] 1. Is it tagged with a type: defect or task? - [x] 2. Is it tagged with a priority: critical, normal or minor? - [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements? - [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. - [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.) ------- For Change Control Board: Issue Closure ------- This review should occur when the issue is imminently going to be closed. - [ ] 1. If the issue is a defect, is the defect fixed? - [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.) - [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)? - [ ] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)? - [ ] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
1.0
Selective Indices from CustomSampler - -------- Issue Description -------- Feature request: limit samples provided from a CustomSampler using a select set of indices. ---------------- For Change Control Board: Issue Review ---------------- This review should occur before any development is performed as a response to this issue. - [x] 1. Is it tagged with a type: defect or task? - [x] 2. Is it tagged with a priority: critical, normal or minor? - [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements? - [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. - [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.) ------- For Change Control Board: Issue Closure ------- This review should occur when the issue is imminently going to be closed. - [ ] 1. If the issue is a defect, is the defect fixed? - [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.) - [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)? - [ ] 4. If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)? - [ ] 5. If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
non_main
selective indices from customsampler issue description feature request limit samples provided from a customsampler using a select set of indices for change control board issue review this review should occur before any development is performed as a response to this issue is it tagged with a type defect or task is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure this review should occur when the issue is imminently going to be closed if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest release branch if yes is there any issue tagged with release create if needed if the issue is being closed without a pull request has an explanation of why it is being closed been provided
0
239,052
18,258,215,858
IssuesEvent
2021-10-03 12:01:00
ahampriyanshu/algo_ds_101
https://api.github.com/repos/ahampriyanshu/algo_ds_101
opened
Metadata
documentation good first issue Hacktoberfest first timer Hacktoberfest2021
### Directory README.md ### Things To Remember - [ ] The content added isn't directly copied from any sources like gfg, tutorialpoint, etc. For theory, use open-source alternatives like WikiPedia. - [ ] I have used github flavoured md syntax only. - [ ] All the relative links are working. - [ ] All the absolute links are working. - [ ] I have synced-up my forked repo. - [ ] Ain't pushing from the main branch.
1.0
Metadata - ### Directory README.md ### Things To Remember - [ ] The content added isn't directly copied from any sources like gfg, tutorialpoint, etc. For theory, use open-source alternatives like WikiPedia. - [ ] I have used github flavoured md syntax only. - [ ] All the relative links are working. - [ ] All the absolute links are working. - [ ] I have synced-up my forked repo. - [ ] Ain't pushing from the main branch.
non_main
metadata directory readme md things to remember the content added isn t directly copied from any sources like gfg tutorialpoint etc for theory use open source alternatives like wikipedia i have used github flavoured md syntax only all the relative links are working all the absolute links are working i have synced up my forked repo ain t pushing from the main branch
0
3,614
14,615,342,748
IssuesEvent
2020-12-22 11:22:58
melisMirza/SWE573_project
https://api.github.com/repos/melisMirza/SWE573_project
opened
Switch (on Dockerized) Application From SqlLite to PostGreSQL
backend maintainance
Direct the DB of application from sqlLite to Postgre. Implement this depedency on the docker files.
True
Switch (on Dockerized) Application From SqlLite to PostGreSQL - Direct the DB of application from sqlLite to Postgre. Implement this depedency on the docker files.
main
switch on dockerized application from sqllite to postgresql direct the db of application from sqllite to postgre implement this depedency on the docker files
1
630
4,146,955,951
IssuesEvent
2016-06-15 03:33:03
Microsoft/DirectXMesh
https://api.github.com/repos/Microsoft/DirectXMesh
closed
Remove VS 2012 adapter code
maintainence
As part of dropping VS 2012 projects, can clean up the following code: * Remove C4005 disable for ``stdint.h`` (workaround for bug with VS 2010 + Windows 7 SDK) * Remove C4481 disable for "override is an extension" (workaround for VS 2010 bug) * Remove ``DIRECTX_STD_CALLCONV`` std::function workaround for VS 2012 * Remove ``DIRECTX_CTOR_DEFAULT`` / ``DIRECTX_CTOR_DELETE`` macros and just use =default, =delete directly (VS 2013 or later supports this) * Remove DirectXMath 3.03 adapters for 3.06 constructs (workaround for Windows 8.0 SDK) * Remove some guarded code patterns for Windows XP (i.e. functions that were added to Windows Vista) * Make consistent use of ``= {}`` to initialize memory to zero (C++11 brace init behavior fixed in VS 2013) * Remove legacy ``WCHAR`` Win32 type and use ``wchar_t``
True
Remove VS 2012 adapter code - As part of dropping VS 2012 projects, can clean up the following code: * Remove C4005 disable for ``stdint.h`` (workaround for bug with VS 2010 + Windows 7 SDK) * Remove C4481 disable for "override is an extension" (workaround for VS 2010 bug) * Remove ``DIRECTX_STD_CALLCONV`` std::function workaround for VS 2012 * Remove ``DIRECTX_CTOR_DEFAULT`` / ``DIRECTX_CTOR_DELETE`` macros and just use =default, =delete directly (VS 2013 or later supports this) * Remove DirectXMath 3.03 adapters for 3.06 constructs (workaround for Windows 8.0 SDK) * Remove some guarded code patterns for Windows XP (i.e. functions that were added to Windows Vista) * Make consistent use of ``= {}`` to initialize memory to zero (C++11 brace init behavior fixed in VS 2013) * Remove legacy ``WCHAR`` Win32 type and use ``wchar_t``
main
remove vs adapter code as part of dropping vs projects can clean up the following code remove disable for stdint h workaround for bug with vs windows sdk remove disable for override is an extension workaround for vs bug remove directx std callconv std function workaround for vs remove directx ctor default directx ctor delete macros and just use default delete directly vs or later supports this remove directxmath adapters for constructs workaround for windows sdk remove some guarded code patterns for windows xp i e functions that were added to windows vista make consistent use of to initialize memory to zero c brace init behavior fixed in vs remove legacy wchar type and use wchar t
1
24,710
17,633,212,440
IssuesEvent
2021-08-19 10:35:12
f1nal3/Juniorgram
https://api.github.com/repos/f1nal3/Juniorgram
closed
Create a Continuous Delivery pipeline.
infrastructure
1) Implement an action that takes a release branch and builds it in release mode. 2) Create a mechanism to deploy the release build to our host. 3) Create a tag system on git that would reflect that. 4) Create a badge =)
1.0
Create a Continuous Delivery pipeline. - 1) Implement an action that takes a release branch and builds it in release mode. 2) Create a mechanism to deploy the release build to our host. 3) Create a tag system on git that would reflect that. 4) Create a badge =)
non_main
create a continuous delivery pipeline implement an action that takes a release branch and builds it in release mode create a mechanism to deploy the release build to our host create a tag system on git that would reflect that create a badge
0
3,449
2,610,062,977
IssuesEvent
2015-02-26 18:18:30
chrsmith/jsjsj122
https://api.github.com/repos/chrsmith/jsjsj122
opened
Where is the best place in Huangyan to treat infertility
auto-migrated Priority-Medium Type-Defect
``` Where is the best place in Huangyan to treat infertility? [Taizhou Wuzhou Reproductive Hospital] 24-hour health consultation hotline: 0576-88066933 (QQ: 800080609) (WeChat: tzwzszyy). Hospital address: 229 Fengnan Road, Jiaojiang District, Taizhou (beside the Fengnan roundabout). Bus routes: take bus 104, 108, 118 or 198, or the Jiaojiang to Jinqing bus, directly to the Fengnan neighborhood; or take bus 107, 105, 109, 112, 901 or 902, get off at Xingxing Square, and walk to the hospital. Services: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis, oligospermia, azoospermia, phimosis, varicocele, gonorrhea, etc. Taizhou Wuzhou Reproductive Hospital is the largest men's health hospital in Taizhou, with authoritative experts online for free consultation, professional and complete examination and treatment equipment, and fees charged strictly according to national standards. Cutting-edge medical equipment, in step with the world. Authoritative experts, a model of professionalism. Humanized service, with everything centered on the patient. For men's health, choose Taizhou Wuzhou Reproductive Hospital: professional men's care for men. ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:45
1.0
Where is the best place in Huangyan to treat infertility - ``` Where is the best place in Huangyan to treat infertility? [Taizhou Wuzhou Reproductive Hospital] 24-hour health consultation hotline: 0576-88066933 (QQ: 800080609) (WeChat: tzwzszyy). Hospital address: 229 Fengnan Road, Jiaojiang District, Taizhou (beside the Fengnan roundabout). Bus routes: take bus 104, 108, 118 or 198, or the Jiaojiang to Jinqing bus, directly to the Fengnan neighborhood; or take bus 107, 105, 109, 112, 901 or 902, get off at Xingxing Square, and walk to the hospital. Services: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis, oligospermia, azoospermia, phimosis, varicocele, gonorrhea, etc. Taizhou Wuzhou Reproductive Hospital is the largest men's health hospital in Taizhou, with authoritative experts online for free consultation, professional and complete examination and treatment equipment, and fees charged strictly according to national standards. Cutting-edge medical equipment, in step with the world. Authoritative experts, a model of professionalism. Humanized service, with everything centered on the patient. For men's health, choose Taizhou Wuzhou Reproductive Hospital: professional men's care for men. ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:45
non_main
where is the best place in huangyan to treat infertility where is the best place in huangyan to treat infertility taizhou wuzhou reproductive hospital hour health consultation hotline qq wechat tzwzszyy hospital address fengnan road jiaojiang district taizhou beside the fengnan roundabout bus routes take the jiaojiang to jinqing bus directly to the fengnan neighborhood or get off at xingxing square and walk to the hospital services impotence premature ejaculation prostatitis prostatic hyperplasia balanitis oligospermia azoospermia phimosis varicocele gonorrhea etc taizhou wuzhou reproductive hospital is the largest men s health hospital in taizhou with authoritative experts online for free consultation professional and complete examination and treatment equipment and fees charged strictly according to national standards cutting edge medical equipment in step with the world authoritative experts a model of professionalism humanized service with everything centered on the patient for men s health choose taizhou wuzhou reproductive hospital professional men s care for men original issue reported on code google com by poweragr gmail com on may at
0
310,998
23,365,763,484
IssuesEvent
2022-08-10 15:14:07
aws-samples/aws-analytics-reference-architecture
https://api.github.com/repos/aws-samples/aws-analytics-reference-architecture
opened
AWS native refarch cannot be deployed in AWS accounts with Lake Formation enabled
bug documentation good first issue
Deploying the [AWS native refarch ](https://github.com/aws-samples/aws-analytics-reference-architecture/tree/main/refarch) in an account with [Lake Formation enabled](https://docs.aws.amazon.com/lake-formation/latest/dg/getting-started-setup.html#setup-change-cat-settings) fails because the CloudFormation execution role is not granted permission to create Glue resources in Lake Formation. In this setup, IAM permissions are not used anymore by Glue. The workaround is to grant Lake Formation permissions to the IAM role used by CDK. By default the IAM role used by CDK is common to all CDK applications deployed in an AWS account and is created when bootstrapping an account with `cdk bootstrap`. This role can be found in the default `CDKToolkit` stack in the CloudFormation console (cdk-xxxxxxx-cfn-exec-role-<ACCOUNT_ID>-<REGION>). We should document this workaround in the [getting started guide](https://github.com/aws-samples/aws-analytics-reference-architecture/tree/main/refarch#getting-started). The long-term solution is to use a [custom bootstrap](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html#bootstrapping-customizing) with: - A custom qualifier to scope the custom bootstrap to AWS Analytics Reference Architecture. To ensure the qualifier is passed to all the stacks, we should probably create a new Stack type (AraStack) - A custom bootstrap CloudFormation template granting Lake Formation permissions to the CDK execution role via an [AWS::LakeFormation::PrincipalPermissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lakeformation-principalpermissions.html)
1.0
AWS native refarch cannot be deployed in AWS accounts with Lake Formation enabled - Deploying the [AWS native refarch ](https://github.com/aws-samples/aws-analytics-reference-architecture/tree/main/refarch) in an account with [Lake Formation enabled](https://docs.aws.amazon.com/lake-formation/latest/dg/getting-started-setup.html#setup-change-cat-settings) fails because the CloudFormation execution role is not granted permission to create Glue resources in Lake Formation. In this setup, IAM permissions are not used anymore by Glue. The workaround is to grant Lake Formation permissions to the IAM role used by CDK. By default the IAM role used by CDK is common to all CDK applications deployed in an AWS account and is created when bootstrapping an account with `cdk bootstrap`. This role can be found in the default `CDKToolkit` stack in the CloudFormation console (cdk-xxxxxxx-cfn-exec-role-<ACCOUNT_ID>-<REGION>). We should document this workaround in the [getting started guide](https://github.com/aws-samples/aws-analytics-reference-architecture/tree/main/refarch#getting-started). The long-term solution is to use a [custom bootstrap](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html#bootstrapping-customizing) with: - A custom qualifier to scope the custom bootstrap to AWS Analytics Reference Architecture. To ensure the qualifier is passed to all the stacks, we should probably create a new Stack type (AraStack) - A custom bootstrap CloudFormation template granting Lake Formation permissions to the CDK execution role via an [AWS::LakeFormation::PrincipalPermissions](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lakeformation-principalpermissions.html)
non_main
aws native refarch cannot be deployed in aws accounts with lake formation enabled deploying the in an account with fails because the cloudformation execution role is not granted permission to create glue resources in lake formation in this setup iam permissions are not used anymore by glue the workaround is to grant lake formation permissions to the iam role used by cdk by default the iam role used by cdk is common to all cdk applications deployed in an aws account and is created when bootstrapping an account with cdk bootstrap this role can be found in the default cdktoolkit stack in cloudformation console cdk xxxxxxx cfn exec role we should document this workaround in the the long term solution is to use a with a custom qualifier to scope the custom bootstrap to aws analytics reference architecture to ensure the qualifier is passed to all the stacks we should probably create a new stack type arastack a custom bootstrap cloudformation template granting lake formation permissions to the cdk execution role via an
0
3,478
13,399,518,669
IssuesEvent
2020-09-03 14:35:37
NaluKit/nalu
https://api.github.com/repos/NaluKit/nalu
closed
remove @Debug annotation
maintainance
Removing the @Debug annotation and the related code will reduce the size of the generated code. Is the debug annotation a valuable feature or can we remove it?
True
remove @Debug annotation - Removing the @Debug annotation and the related code will reduce the size of the generated code. Is the debug annotation a valuable feature or can we remove it?
main
remove debug annotation removing the debug annotation and the related code will reduce the size of the generated code is the debug annotation a valuable feature or can we remove it
1
14,962
5,028,477,518
IssuesEvent
2016-12-15 18:20:16
Codewars/codewars.com
https://api.github.com/repos/Codewars/codewars.com
closed
Impossible to republish, impossible to delete
bug Deployed to preview.codewars.com high priority
I can't republish this kata: https://www.codewars.com/kata/56dbeec613c2f63be4000be6/edit/fsharp I can't delete the F# language in this kata (I thought that could have been a solution...). " Are you sure you want to delete ?" -> "YES, I WANT TO DELETE THIS LANGUAGE" but nothing happens !
1.0
Impossible to republish, impossible to delete - I can't republish this kata: https://www.codewars.com/kata/56dbeec613c2f63be4000be6/edit/fsharp I can't delete the F# language in this kata (I thought that could have been a solution...). " Are you sure you want to delete ?" -> "YES, I WANT TO DELETE THIS LANGUAGE" but nothing happens !
non_main
impossible to republish impossible to delete i can t republish this kata i can t delete the f language in this kata i thought that could have been a solution are you sure you want to delete yes i want to delete this language but nothing happens
0
1,568
6,572,324,588
IssuesEvent
2017-09-11 01:23:27
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
lxc_container: snapshot clone container creation incorrectly starts the origin container
affects_2.1 bug_report cloud waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> lxc_container ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.0.0 config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> None ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> I have used both Ubuntu 14.04 LTS and Ubuntu 16.04 LTS with the same results. More details in this gist: https://gist.github.com/odyssey4me/97e0edbb9e46748cdf8775b786f820b6 ##### SUMMARY <!--- Explain the problem briefly --> When using the lxc_container module to create a container (overlayfs1) based on a snapshot of another container (base1) (ie `lxc-clone --snapshot`), instead of starting 'overlayfs1' the base container starts and thus the one that's supposed to start fails. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> Using https://gist.github.com/odyssey4me/97e0edbb9e46748cdf8775b786f820b6#file-0-create-containers-yml you will notice the changes in state. Instead of overlayfs1 and overlayfs2 being in a started state (with base1 and base2 in a stopped state), the result is the opposite. The playbook below does a comparative test using the module and the CLI. 
<!--- Paste example playbooks or commands between quotes below --> ``` - name: Create containers via host target hosts: localhost tasks: - name: Clean up previous tests lxc_container: name: "{{ item }}" state: absent with_items: - overlayfs1 - base1 - overlayfs2 - base2 - name: Create container base1 lxc_container: name: base1 template: download state: stopped backing_store: dir template_options: --dist ubuntu --release trusty --arch amd64 - name: Check state of base1 command: lxc-info -n base1 - name: Create container overlay1 lxc_container: name: base1 clone_snapshot: yes clone_name: overlayfs1 state: started backing_store: overlayfs - name: Check state of base1 command: lxc-info -n base1 - name: Check state of overlayfs1 command: lxc-info -n overlayfs1 - name: Create container base2 command: lxc-create --name=base2 --template=download -- --dist ubuntu --release trusty --arch amd64 - name: Check state of base2 command: lxc-info -n base2 - name: Create container overlayfs2 command: lxc-clone --snapshot --backingstore overlayfs --orig base2 --new overlayfs2 - name: Start container overlayfs2 command: lxc-start --name overlayfs2 - name: Check state of base2 command: lxc-info -n base2 - name: Check state of overlayfs2 command: lxc-info -n overlayfs2 ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> The container `overlayfs1` and `overlayfs2` should be running, while the base containers should be stopped. ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> The container `base1` and `overlayfs2` are running. The container `overlayfs1` tried to start, but failed because the base was running. 
<!--- Paste verbatim command output between quotes below --> ``` root@lxc-xenial1:~# ansible-playbook -i inventory create-containers.yml -vvvv No config file found; using defaults Loaded callback default of type stdout, v2.0 PLAYBOOK: create-containers.yml ************************************************ 1 plays in create-containers.yml PLAY [Create containers via host target] *************************************** TASK [setup] ******************************************************************* <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610711.39-163986335105665 `" && echo ansible-tmp-1468610711.39-163986335105665="` echo $HOME/.ansible/tmp/ansible-tmp-1468610711.39-163986335105665 `" ) && sleep 0' <localhost> PUT /tmp/tmpkQAhoA TO /root/.ansible/tmp/ansible-tmp-1468610711.39-163986335105665/setup <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610711.39-163986335105665/setup; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610711.39-163986335105665/" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [Clean up previous tests] ************************************************* task path: /root/create-containers.yml:5 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.07-85439717723482 `" && echo ansible-tmp-1468610712.07-85439717723482="` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.07-85439717723482 `" ) && sleep 0' <localhost> PUT /tmp/tmpDoUxEA TO /root/.ansible/tmp/ansible-tmp-1468610712.07-85439717723482/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610712.07-85439717723482/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610712.07-85439717723482/" > 
/dev/null 2>&1 && sleep 0' ok: [localhost] => (item=overlayfs1) => {"changed": false, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "dir", "clone_name": null, "clone_snapshot": false, "config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "overlayfs1", "lxc_path": null, "name": "overlayfs1", "state": "absent", "template": "ubuntu", "template_options": null, "thinpool": null, "vg_name": "lxc", "zfs_root": null}, "module_name": "lxc_container"}, "item": "overlayfs1", "lxc_container": {"init_pid": -1, "interfaces": [], "ips": [], "name": "overlayfs1", "state": "absent"}} <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.25-187640896335277 `" && echo ansible-tmp-1468610712.25-187640896335277="` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.25-187640896335277 `" ) && sleep 0' <localhost> PUT /tmp/tmpb7FZXw TO /root/.ansible/tmp/ansible-tmp-1468610712.25-187640896335277/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610712.25-187640896335277/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610712.25-187640896335277/" > /dev/null 2>&1 && sleep 0' ok: [localhost] => (item=base1) => {"changed": false, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "dir", "clone_name": null, "clone_snapshot": false, "config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "base1", "lxc_path": null, "name": "base1", "state": "absent", "template": "ubuntu", "template_options": null, "thinpool": null, "vg_name": "lxc", 
"zfs_root": null}, "module_name": "lxc_container"}, "item": "base1", "lxc_container": {"init_pid": -1, "interfaces": [], "ips": [], "name": "base1", "state": "absent"}} <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.45-240294886609476 `" && echo ansible-tmp-1468610712.45-240294886609476="` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.45-240294886609476 `" ) && sleep 0' <localhost> PUT /tmp/tmp7lByo5 TO /root/.ansible/tmp/ansible-tmp-1468610712.45-240294886609476/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610712.45-240294886609476/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610712.45-240294886609476/" > /dev/null 2>&1 && sleep 0' ok: [localhost] => (item=overlayfs2) => {"changed": false, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "dir", "clone_name": null, "clone_snapshot": false, "config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "overlayfs2", "lxc_path": null, "name": "overlayfs2", "state": "absent", "template": "ubuntu", "template_options": null, "thinpool": null, "vg_name": "lxc", "zfs_root": null}, "module_name": "lxc_container"}, "item": "overlayfs2", "lxc_container": {"init_pid": -1, "interfaces": [], "ips": [], "name": "overlayfs2", "state": "absent"}} <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.63-254367127523273 `" && echo ansible-tmp-1468610712.63-254367127523273="` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.63-254367127523273 `" ) && sleep 0' <localhost> PUT /tmp/tmpkQCnTD TO /root/.ansible/tmp/ansible-tmp-1468610712.63-254367127523273/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 
LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610712.63-254367127523273/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610712.63-254367127523273/" > /dev/null 2>&1 && sleep 0' ok: [localhost] => (item=base2) => {"changed": false, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "dir", "clone_name": null, "clone_snapshot": false, "config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "base2", "lxc_path": null, "name": "base2", "state": "absent", "template": "ubuntu", "template_options": null, "thinpool": null, "vg_name": "lxc", "zfs_root": null}, "module_name": "lxc_container"}, "item": "base2", "lxc_container": {"init_pid": -1, "interfaces": [], "ips": [], "name": "base2", "state": "absent"}} TASK [Create container base1] ************************************************** task path: /root/create-containers.yml:15 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.82-83067903917603 `" && echo ansible-tmp-1468610712.82-83067903917603="` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.82-83067903917603 `" ) && sleep 0' <localhost> PUT /tmp/tmpCKnGWI TO /root/.ansible/tmp/ansible-tmp-1468610712.82-83067903917603/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610712.82-83067903917603/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610712.82-83067903917603/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "dir", "clone_name": null, "clone_snapshot": false, 
"config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "base1", "lxc_path": null, "name": "base1", "state": "stopped", "template": "download", "template_options": "--dist ubuntu --release trusty --arch amd64", "thinpool": null, "vg_name": "lxc", "zfs_root": null}, "module_name": "lxc_container"}, "lxc_container": {"init_pid": -1, "interfaces": [], "ips": [], "name": "base1", "state": "stopped"}} TASK [Check state of base1] **************************************************** task path: /root/create-containers.yml:23 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610722.41-225738658388606 `" && echo ansible-tmp-1468610722.41-225738658388606="` echo $HOME/.ansible/tmp/ansible-tmp-1468610722.41-225738658388606 `" ) && sleep 0' <localhost> PUT /tmp/tmpjSt_Zx TO /root/.ansible/tmp/ansible-tmp-1468610722.41-225738658388606/command <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610722.41-225738658388606/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610722.41-225738658388606/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "cmd": ["lxc-info", "-n", "base1"], "delta": "0:00:00.004095", "end": "2016-07-15 19:25:22.562648", "invocation": {"module_args": {"_raw_params": "lxc-info -n base1", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:22.558553", "stderr": "", "stdout": "Name: base1\nState: STOPPED", "stdout_lines": ["Name: base1", "State: STOPPED"], "warnings": []} TASK [Create container overlay1] *********************************************** task path: /root/create-containers.yml:26 <localhost> 
ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610722.61-271576512150134 `" && echo ansible-tmp-1468610722.61-271576512150134="` echo $HOME/.ansible/tmp/ansible-tmp-1468610722.61-271576512150134 `" ) && sleep 0' <localhost> PUT /tmp/tmpi7oEEz TO /root/.ansible/tmp/ansible-tmp-1468610722.61-271576512150134/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610722.61-271576512150134/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610722.61-271576512150134/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "overlayfs", "clone_name": "overlayfs1", "clone_snapshot": true, "config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "base1", "lxc_path": null, "name": "base1", "state": "started", "template": "ubuntu", "template_options": null, "thinpool": null, "vg_name": "lxc", "zfs_root": null}, "module_name": "lxc_container"}, "lxc_container": {"cloned": true, "init_pid": 30408, "interfaces": ["eth0", "lo"], "ips": [], "name": "base1", "state": "running"}} TASK [Check state of base1] **************************************************** task path: /root/create-containers.yml:34 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610724.67-62785259699531 `" && echo ansible-tmp-1468610724.67-62785259699531="` echo $HOME/.ansible/tmp/ansible-tmp-1468610724.67-62785259699531 `" ) && sleep 0' <localhost> PUT /tmp/tmpOHU5Ko TO /root/.ansible/tmp/ansible-tmp-1468610724.67-62785259699531/command <localhost> EXEC 
/bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610724.67-62785259699531/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610724.67-62785259699531/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "cmd": ["lxc-info", "-n", "base1"], "delta": "0:00:00.007382", "end": "2016-07-15 19:25:24.830743", "invocation": {"module_args": {"_raw_params": "lxc-info -n base1", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:24.823361", "stderr": "", "stdout": "Name: base1\nState: RUNNING\nPID: 30408\nCPU use: 0.54 seconds\nBlkIO use: 128.00 KiB\nMemory use: 3.21 MiB\nKMem use: 0 bytes\nLink: veth3EIQJU\n TX bytes: 168 bytes\n RX bytes: 180 bytes\n Total bytes: 348 bytes", "stdout_lines": ["Name: base1", "State: RUNNING", "PID: 30408", "CPU use: 0.54 seconds", "BlkIO use: 128.00 KiB", "Memory use: 3.21 MiB", "KMem use: 0 bytes", "Link: veth3EIQJU", " TX bytes: 168 bytes", " RX bytes: 180 bytes", " Total bytes: 348 bytes"], "warnings": []} TASK [Check state of overlayfs1] *********************************************** task path: /root/create-containers.yml:37 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610724.87-52326584062516 `" && echo ansible-tmp-1468610724.87-52326584062516="` echo $HOME/.ansible/tmp/ansible-tmp-1468610724.87-52326584062516 `" ) && sleep 0' <localhost> PUT /tmp/tmpBWLeQY TO /root/.ansible/tmp/ansible-tmp-1468610724.87-52326584062516/command <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610724.87-52326584062516/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610724.87-52326584062516/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => 
{"changed": true, "cmd": ["lxc-info", "-n", "overlayfs1"], "delta": "0:00:00.004219", "end": "2016-07-15 19:25:25.034033", "invocation": {"module_args": {"_raw_params": "lxc-info -n overlayfs1", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:25.029814", "stderr": "", "stdout": "Name: overlayfs1\nState: STOPPED", "stdout_lines": ["Name: overlayfs1", "State: STOPPED"], "warnings": []} TASK [Create container base2] ************************************************** task path: /root/create-containers.yml:40 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610725.08-277848135836061 `" && echo ansible-tmp-1468610725.08-277848135836061="` echo $HOME/.ansible/tmp/ansible-tmp-1468610725.08-277848135836061 `" ) && sleep 0' <localhost> PUT /tmp/tmpASWeTJ TO /root/.ansible/tmp/ansible-tmp-1468610725.08-277848135836061/command <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610725.08-277848135836061/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610725.08-277848135836061/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "cmd": ["lxc-create", "--name=base2", "--template=download", "--", "--dist", "ubuntu", "--release", "trusty", "--arch", "amd64"], "delta": "0:00:09.418855", "end": "2016-07-15 19:25:34.655951", "invocation": {"module_args": {"_raw_params": "lxc-create --name=base2 --template=download -- --dist ubuntu --release trusty --arch amd64", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:25.237096", "stderr": "", "stdout": "Using image from local cache\nUnpacking the rootfs\n\n---\nYou just created an Ubuntu 
lxc_container: snapshot clone container creation incorrectly starts the origin container

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
lxc_container

##### ANSIBLE VERSION
```
ansible 2.1.0.0
  config file =
  configured module search path = Default w/o overrides
```

##### CONFIGURATION
None

##### OS / ENVIRONMENT
I have used both Ubuntu 14.04 LTS and Ubuntu 16.04 LTS with the same results. More details in this gist: https://gist.github.com/odyssey4me/97e0edbb9e46748cdf8775b786f820b6

##### SUMMARY
When using the lxc_container module to create a container (overlayfs1) as a snapshot clone of another container (base1) (i.e. `lxc-clone --snapshot`), the origin container is started instead of overlayfs1, and the container that was supposed to start consequently fails to.

##### STEPS TO REPRODUCE
Using https://gist.github.com/odyssey4me/97e0edbb9e46748cdf8775b786f820b6#file-0-create-containers-yml you will notice the changes in state. Instead of overlayfs1 and overlayfs2 ending up started (with base1 and base2 stopped), the result is the opposite. The playbook below runs a comparative test using the module and the CLI.

```
- name: Create containers via host target
  hosts: localhost
  tasks:
    - name: Clean up previous tests
      lxc_container:
        name: "{{ item }}"
        state: absent
      with_items:
        - overlayfs1
        - base1
        - overlayfs2
        - base2

    - name: Create container base1
      lxc_container:
        name: base1
        template: download
        state: stopped
        backing_store: dir
        template_options: --dist ubuntu --release trusty --arch amd64

    - name: Check state of base1
      command: lxc-info -n base1

    - name: Create container overlay1
      lxc_container:
        name: base1
        clone_snapshot: yes
        clone_name: overlayfs1
        state: started
        backing_store: overlayfs

    - name: Check state of base1
      command: lxc-info -n base1

    - name: Check state of overlayfs1
      command: lxc-info -n overlayfs1

    - name: Create container base2
      command: lxc-create --name=base2 --template=download -- --dist ubuntu --release trusty --arch amd64

    - name: Check state of base2
      command: lxc-info -n base2

    - name: Create container overlayfs2
      command: lxc-clone --snapshot --backingstore overlayfs --orig base2 --new overlayfs2

    - name: Start container overlayfs2
      command: lxc-start --name overlayfs2

    - name: Check state of base2
      command: lxc-info -n base2

    - name: Check state of overlayfs2
      command: lxc-info -n overlayfs2
```

##### EXPECTED RESULTS
The containers `overlayfs1` and `overlayfs2` should be running, while the base containers should be stopped.

##### ACTUAL RESULTS
The containers `base1` and `overlayfs2` are running. The container `overlayfs1` tried to start, but failed because its origin was running.
<!--- Paste verbatim command output between quotes below --> ``` root@lxc-xenial1:~# ansible-playbook -i inventory create-containers.yml -vvvv No config file found; using defaults Loaded callback default of type stdout, v2.0 PLAYBOOK: create-containers.yml ************************************************ 1 plays in create-containers.yml PLAY [Create containers via host target] *************************************** TASK [setup] ******************************************************************* <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610711.39-163986335105665 `" && echo ansible-tmp-1468610711.39-163986335105665="` echo $HOME/.ansible/tmp/ansible-tmp-1468610711.39-163986335105665 `" ) && sleep 0' <localhost> PUT /tmp/tmpkQAhoA TO /root/.ansible/tmp/ansible-tmp-1468610711.39-163986335105665/setup <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610711.39-163986335105665/setup; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610711.39-163986335105665/" > /dev/null 2>&1 && sleep 0' ok: [localhost] TASK [Clean up previous tests] ************************************************* task path: /root/create-containers.yml:5 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.07-85439717723482 `" && echo ansible-tmp-1468610712.07-85439717723482="` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.07-85439717723482 `" ) && sleep 0' <localhost> PUT /tmp/tmpDoUxEA TO /root/.ansible/tmp/ansible-tmp-1468610712.07-85439717723482/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610712.07-85439717723482/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610712.07-85439717723482/" > 
/dev/null 2>&1 && sleep 0' ok: [localhost] => (item=overlayfs1) => {"changed": false, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "dir", "clone_name": null, "clone_snapshot": false, "config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "overlayfs1", "lxc_path": null, "name": "overlayfs1", "state": "absent", "template": "ubuntu", "template_options": null, "thinpool": null, "vg_name": "lxc", "zfs_root": null}, "module_name": "lxc_container"}, "item": "overlayfs1", "lxc_container": {"init_pid": -1, "interfaces": [], "ips": [], "name": "overlayfs1", "state": "absent"}} <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.25-187640896335277 `" && echo ansible-tmp-1468610712.25-187640896335277="` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.25-187640896335277 `" ) && sleep 0' <localhost> PUT /tmp/tmpb7FZXw TO /root/.ansible/tmp/ansible-tmp-1468610712.25-187640896335277/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610712.25-187640896335277/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610712.25-187640896335277/" > /dev/null 2>&1 && sleep 0' ok: [localhost] => (item=base1) => {"changed": false, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "dir", "clone_name": null, "clone_snapshot": false, "config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "base1", "lxc_path": null, "name": "base1", "state": "absent", "template": "ubuntu", "template_options": null, "thinpool": null, "vg_name": "lxc", 
"zfs_root": null}, "module_name": "lxc_container"}, "item": "base1", "lxc_container": {"init_pid": -1, "interfaces": [], "ips": [], "name": "base1", "state": "absent"}} <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.45-240294886609476 `" && echo ansible-tmp-1468610712.45-240294886609476="` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.45-240294886609476 `" ) && sleep 0' <localhost> PUT /tmp/tmp7lByo5 TO /root/.ansible/tmp/ansible-tmp-1468610712.45-240294886609476/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610712.45-240294886609476/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610712.45-240294886609476/" > /dev/null 2>&1 && sleep 0' ok: [localhost] => (item=overlayfs2) => {"changed": false, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "dir", "clone_name": null, "clone_snapshot": false, "config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "overlayfs2", "lxc_path": null, "name": "overlayfs2", "state": "absent", "template": "ubuntu", "template_options": null, "thinpool": null, "vg_name": "lxc", "zfs_root": null}, "module_name": "lxc_container"}, "item": "overlayfs2", "lxc_container": {"init_pid": -1, "interfaces": [], "ips": [], "name": "overlayfs2", "state": "absent"}} <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.63-254367127523273 `" && echo ansible-tmp-1468610712.63-254367127523273="` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.63-254367127523273 `" ) && sleep 0' <localhost> PUT /tmp/tmpkQCnTD TO /root/.ansible/tmp/ansible-tmp-1468610712.63-254367127523273/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 
LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610712.63-254367127523273/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610712.63-254367127523273/" > /dev/null 2>&1 && sleep 0' ok: [localhost] => (item=base2) => {"changed": false, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "dir", "clone_name": null, "clone_snapshot": false, "config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "base2", "lxc_path": null, "name": "base2", "state": "absent", "template": "ubuntu", "template_options": null, "thinpool": null, "vg_name": "lxc", "zfs_root": null}, "module_name": "lxc_container"}, "item": "base2", "lxc_container": {"init_pid": -1, "interfaces": [], "ips": [], "name": "base2", "state": "absent"}} TASK [Create container base1] ************************************************** task path: /root/create-containers.yml:15 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.82-83067903917603 `" && echo ansible-tmp-1468610712.82-83067903917603="` echo $HOME/.ansible/tmp/ansible-tmp-1468610712.82-83067903917603 `" ) && sleep 0' <localhost> PUT /tmp/tmpCKnGWI TO /root/.ansible/tmp/ansible-tmp-1468610712.82-83067903917603/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610712.82-83067903917603/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610712.82-83067903917603/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "dir", "clone_name": null, "clone_snapshot": false, 
"config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "base1", "lxc_path": null, "name": "base1", "state": "stopped", "template": "download", "template_options": "--dist ubuntu --release trusty --arch amd64", "thinpool": null, "vg_name": "lxc", "zfs_root": null}, "module_name": "lxc_container"}, "lxc_container": {"init_pid": -1, "interfaces": [], "ips": [], "name": "base1", "state": "stopped"}} TASK [Check state of base1] **************************************************** task path: /root/create-containers.yml:23 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610722.41-225738658388606 `" && echo ansible-tmp-1468610722.41-225738658388606="` echo $HOME/.ansible/tmp/ansible-tmp-1468610722.41-225738658388606 `" ) && sleep 0' <localhost> PUT /tmp/tmpjSt_Zx TO /root/.ansible/tmp/ansible-tmp-1468610722.41-225738658388606/command <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610722.41-225738658388606/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610722.41-225738658388606/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "cmd": ["lxc-info", "-n", "base1"], "delta": "0:00:00.004095", "end": "2016-07-15 19:25:22.562648", "invocation": {"module_args": {"_raw_params": "lxc-info -n base1", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:22.558553", "stderr": "", "stdout": "Name: base1\nState: STOPPED", "stdout_lines": ["Name: base1", "State: STOPPED"], "warnings": []} TASK [Create container overlay1] *********************************************** task path: /root/create-containers.yml:26 <localhost> 
ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610722.61-271576512150134 `" && echo ansible-tmp-1468610722.61-271576512150134="` echo $HOME/.ansible/tmp/ansible-tmp-1468610722.61-271576512150134 `" ) && sleep 0' <localhost> PUT /tmp/tmpi7oEEz TO /root/.ansible/tmp/ansible-tmp-1468610722.61-271576512150134/lxc_container <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610722.61-271576512150134/lxc_container; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610722.61-271576512150134/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "invocation": {"module_args": {"archive": false, "archive_compression": "gzip", "archive_path": null, "backing_store": "overlayfs", "clone_name": "overlayfs1", "clone_snapshot": true, "config": null, "container_command": null, "container_config": null, "container_log": false, "container_log_level": "INFO", "directory": null, "fs_size": "5G", "fs_type": "ext4", "lv_name": "base1", "lxc_path": null, "name": "base1", "state": "started", "template": "ubuntu", "template_options": null, "thinpool": null, "vg_name": "lxc", "zfs_root": null}, "module_name": "lxc_container"}, "lxc_container": {"cloned": true, "init_pid": 30408, "interfaces": ["eth0", "lo"], "ips": [], "name": "base1", "state": "running"}} TASK [Check state of base1] **************************************************** task path: /root/create-containers.yml:34 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610724.67-62785259699531 `" && echo ansible-tmp-1468610724.67-62785259699531="` echo $HOME/.ansible/tmp/ansible-tmp-1468610724.67-62785259699531 `" ) && sleep 0' <localhost> PUT /tmp/tmpOHU5Ko TO /root/.ansible/tmp/ansible-tmp-1468610724.67-62785259699531/command <localhost> EXEC 
/bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610724.67-62785259699531/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610724.67-62785259699531/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "cmd": ["lxc-info", "-n", "base1"], "delta": "0:00:00.007382", "end": "2016-07-15 19:25:24.830743", "invocation": {"module_args": {"_raw_params": "lxc-info -n base1", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:24.823361", "stderr": "", "stdout": "Name: base1\nState: RUNNING\nPID: 30408\nCPU use: 0.54 seconds\nBlkIO use: 128.00 KiB\nMemory use: 3.21 MiB\nKMem use: 0 bytes\nLink: veth3EIQJU\n TX bytes: 168 bytes\n RX bytes: 180 bytes\n Total bytes: 348 bytes", "stdout_lines": ["Name: base1", "State: RUNNING", "PID: 30408", "CPU use: 0.54 seconds", "BlkIO use: 128.00 KiB", "Memory use: 3.21 MiB", "KMem use: 0 bytes", "Link: veth3EIQJU", " TX bytes: 168 bytes", " RX bytes: 180 bytes", " Total bytes: 348 bytes"], "warnings": []} TASK [Check state of overlayfs1] *********************************************** task path: /root/create-containers.yml:37 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610724.87-52326584062516 `" && echo ansible-tmp-1468610724.87-52326584062516="` echo $HOME/.ansible/tmp/ansible-tmp-1468610724.87-52326584062516 `" ) && sleep 0' <localhost> PUT /tmp/tmpBWLeQY TO /root/.ansible/tmp/ansible-tmp-1468610724.87-52326584062516/command <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610724.87-52326584062516/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610724.87-52326584062516/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => 
{"changed": true, "cmd": ["lxc-info", "-n", "overlayfs1"], "delta": "0:00:00.004219", "end": "2016-07-15 19:25:25.034033", "invocation": {"module_args": {"_raw_params": "lxc-info -n overlayfs1", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:25.029814", "stderr": "", "stdout": "Name: overlayfs1\nState: STOPPED", "stdout_lines": ["Name: overlayfs1", "State: STOPPED"], "warnings": []} TASK [Create container base2] ************************************************** task path: /root/create-containers.yml:40 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610725.08-277848135836061 `" && echo ansible-tmp-1468610725.08-277848135836061="` echo $HOME/.ansible/tmp/ansible-tmp-1468610725.08-277848135836061 `" ) && sleep 0' <localhost> PUT /tmp/tmpASWeTJ TO /root/.ansible/tmp/ansible-tmp-1468610725.08-277848135836061/command <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610725.08-277848135836061/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610725.08-277848135836061/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "cmd": ["lxc-create", "--name=base2", "--template=download", "--", "--dist", "ubuntu", "--release", "trusty", "--arch", "amd64"], "delta": "0:00:09.418855", "end": "2016-07-15 19:25:34.655951", "invocation": {"module_args": {"_raw_params": "lxc-create --name=base2 --template=download -- --dist ubuntu --release trusty --arch amd64", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:25.237096", "stderr": "", "stdout": "Using image from local cache\nUnpacking the rootfs\n\n---\nYou just created an Ubuntu 
container (release=trusty, arch=amd64, variant=default)\n\nTo enable sshd, run: apt-get install openssh-server\n\nFor security reason, container images ship without user accounts\nand without a root password.\n\nUse lxc-attach or chroot directly into the rootfs to set a root password\nor create user accounts.", "stdout_lines": ["Using image from local cache", "Unpacking the rootfs", "", "---", "You just created an Ubuntu container (release=trusty, arch=amd64, variant=default)", "", "To enable sshd, run: apt-get install openssh-server", "", "For security reason, container images ship without user accounts", "and without a root password.", "", "Use lxc-attach or chroot directly into the rootfs to set a root password", "or create user accounts."], "warnings": []} TASK [Check state of base2] **************************************************** task path: /root/create-containers.yml:43 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610734.7-39936234861778 `" && echo ansible-tmp-1468610734.7-39936234861778="` echo $HOME/.ansible/tmp/ansible-tmp-1468610734.7-39936234861778 `" ) && sleep 0' <localhost> PUT /tmp/tmpvHHdSN TO /root/.ansible/tmp/ansible-tmp-1468610734.7-39936234861778/command <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610734.7-39936234861778/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610734.7-39936234861778/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "cmd": ["lxc-info", "-n", "base2"], "delta": "0:00:00.004381", "end": "2016-07-15 19:25:34.856930", "invocation": {"module_args": {"_raw_params": "lxc-info -n base2", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:34.852549", "stderr": "", "stdout": "Name: 
base2\nState: STOPPED", "stdout_lines": ["Name: base2", "State: STOPPED"], "warnings": []} TASK [Create container overlayfs2] ********************************************* task path: /root/create-containers.yml:46 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610734.89-236288139419663 `" && echo ansible-tmp-1468610734.89-236288139419663="` echo $HOME/.ansible/tmp/ansible-tmp-1468610734.89-236288139419663 `" ) && sleep 0' <localhost> PUT /tmp/tmpH1MCEQ TO /root/.ansible/tmp/ansible-tmp-1468610734.89-236288139419663/command <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610734.89-236288139419663/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610734.89-236288139419663/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "cmd": ["lxc-clone", "--snapshot", "--backingstore", "overlayfs", "--orig", "base2", "--new", "overlayfs2"], "delta": "0:00:00.035715", "end": "2016-07-15 19:25:35.079051", "invocation": {"module_args": {"_raw_params": "lxc-clone --snapshot --backingstore overlayfs --orig base2 --new overlayfs2", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:35.043336", "stderr": "lxc-clone is deprecated in favor of lxc-copy.", "stdout": "Created container overlayfs2 as snapshot of base2", "stdout_lines": ["Created container overlayfs2 as snapshot of base2"], "warnings": []} TASK [Start container overlayfs2] ********************************************** task path: /root/create-containers.yml:49 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610735.12-118602587099220 `" && echo ansible-tmp-1468610735.12-118602587099220="` echo 
$HOME/.ansible/tmp/ansible-tmp-1468610735.12-118602587099220 `" ) && sleep 0' <localhost> PUT /tmp/tmpOn0cIP TO /root/.ansible/tmp/ansible-tmp-1468610735.12-118602587099220/command <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610735.12-118602587099220/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610735.12-118602587099220/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "cmd": ["lxc-start", "--name", "overlayfs2"], "delta": "0:00:00.113790", "end": "2016-07-15 19:25:35.381053", "invocation": {"module_args": {"_raw_params": "lxc-start --name overlayfs2", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:35.267263", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []} TASK [Check state of base2] **************************************************** task path: /root/create-containers.yml:52 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610735.42-115944933709368 `" && echo ansible-tmp-1468610735.42-115944933709368="` echo $HOME/.ansible/tmp/ansible-tmp-1468610735.42-115944933709368 `" ) && sleep 0' <localhost> PUT /tmp/tmpUJItZB TO /root/.ansible/tmp/ansible-tmp-1468610735.42-115944933709368/command <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610735.42-115944933709368/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610735.42-115944933709368/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "cmd": ["lxc-info", "-n", "base2"], "delta": "0:00:00.004206", "end": "2016-07-15 19:25:35.564378", "invocation": {"module_args": {"_raw_params": "lxc-info -n base2", "_uses_shell": false, "chdir": null, 
"creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:35.560172", "stderr": "", "stdout": "Name: base2\nState: STOPPED", "stdout_lines": ["Name: base2", "State: STOPPED"], "warnings": []} TASK [Check state of overlayfs2] *********************************************** task path: /root/create-containers.yml:55 <localhost> ESTABLISH LOCAL CONNECTION FOR USER: root <localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1468610735.61-103782671782842 `" && echo ansible-tmp-1468610735.61-103782671782842="` echo $HOME/.ansible/tmp/ansible-tmp-1468610735.61-103782671782842 `" ) && sleep 0' <localhost> PUT /tmp/tmpoDyZBZ TO /root/.ansible/tmp/ansible-tmp-1468610735.61-103782671782842/command <localhost> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1468610735.61-103782671782842/command; rm -rf "/root/.ansible/tmp/ansible-tmp-1468610735.61-103782671782842/" > /dev/null 2>&1 && sleep 0' changed: [localhost] => {"changed": true, "cmd": ["lxc-info", "-n", "overlayfs2"], "delta": "0:00:00.007050", "end": "2016-07-15 19:25:35.759251", "invocation": {"module_args": {"_raw_params": "lxc-info -n overlayfs2", "_uses_shell": false, "chdir": null, "creates": null, "executable": null, "removes": null, "warn": true}, "module_name": "command"}, "rc": 0, "start": "2016-07-15 19:25:35.752201", "stderr": "", "stdout": "Name: overlayfs2\nState: RUNNING\nPID: 32287\nCPU use: 0.82 seconds\nBlkIO use: 128.00 KiB\nMemory use: 3.64 MiB\nKMem use: 0 bytes\nLink: veth5A5U1B\n TX bytes: 168 bytes\n RX bytes: 168 bytes\n Total bytes: 336 bytes", "stdout_lines": ["Name: overlayfs2", "State: RUNNING", "PID: 32287", "CPU use: 0.82 seconds", "BlkIO use: 128.00 KiB", "Memory use: 3.64 MiB", "KMem use: 0 bytes", "Link: veth5A5U1B", " TX bytes: 168 bytes", " RX bytes: 168 bytes", " Total bytes: 336 bytes"], "warnings": 
[]} PLAY RECAP ********************************************************************* localhost : ok=13 changed=11 unreachable=0 failed=0 ```
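For what it's worth, the end state above (origin running, clone stopped) is exactly what you would get if the module's clone path created the clone but then applied the `started` state to the original container object instead of the new one. The snippet below is only a toy model of that suspected control flow, not the module's actual code — the class and function names are invented for illustration:

```python
# Toy model of the suspected bug: after cloning, start() is applied to
# the origin container instead of the newly created clone.

class FakeContainer:
    """Minimal stand-in for an LXC container (invented for illustration)."""

    def __init__(self, name):
        self.name = name
        self.state = "stopped"

    def clone(self, clone_name):
        # A fresh clone starts out stopped, like lxc-clone --snapshot.
        return FakeContainer(clone_name)

    def start(self):
        self.state = "running"


def buggy_clone_and_start(base, clone_name):
    # Bug: the clone is created, but start() is called on `base`,
    # mirroring the behaviour reported above.
    clone = base.clone(clone_name)
    base.start()
    return clone


def fixed_clone_and_start(base, clone_name):
    # Expected behaviour: the clone, not the origin, is started.
    clone = base.clone(clone_name)
    clone.start()
    return clone


base1 = FakeContainer("base1")
overlayfs1 = buggy_clone_and_start(base1, "overlayfs1")
print(base1.state, overlayfs1.state)   # running stopped  (observed)

base2 = FakeContainer("base2")
overlayfs2 = fixed_clone_and_start(base2, "overlayfs2")
print(base2.state, overlayfs2.state)   # stopped running  (expected)
```

Running the buggy path reproduces the observed states from the lxc-info checks, while the fixed path matches what the CLI comparison (`lxc-clone` followed by `lxc-start --name overlayfs2`) produces.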
ansible tmp ansible tmp lxc container rm rf root ansible tmp ansible tmp dev null sleep ok item changed false invocation module args archive false archive compression gzip archive path null backing store dir clone name null clone snapshot false config null container command null container config null container log false container log level info directory null fs size fs type lv name lxc path null name state absent template ubuntu template options null thinpool null vg name lxc zfs root null module name lxc container item lxc container init pid interfaces ips name state absent exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp lxc container exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp lxc container rm rf root ansible tmp ansible tmp dev null sleep ok item changed false invocation module args archive false archive compression gzip archive path null backing store dir clone name null clone snapshot false config null container command null container config null container log false container log level info directory null fs size fs type lv name lxc path null name state absent template ubuntu template options null thinpool null vg name lxc zfs root null module name lxc container item lxc container init pid interfaces ips name state absent exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpkqcntd to root ansible tmp ansible tmp lxc container exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp lxc container rm rf root ansible tmp ansible tmp dev null sleep ok item changed false invocation module args archive false archive compression gzip archive path null backing store dir clone name null clone snapshot false config null container command null container config null 
container log false container log level info directory null fs size fs type lv name lxc path null name state absent template ubuntu template options null thinpool null vg name lxc zfs root null module name lxc container item lxc container init pid interfaces ips name state absent task task path root create containers yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpckngwi to root ansible tmp ansible tmp lxc container exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp lxc container rm rf root ansible tmp ansible tmp dev null sleep changed changed true invocation module args archive false archive compression gzip archive path null backing store dir clone name null clone snapshot false config null container command null container config null container log false container log level info directory null fs size fs type lv name lxc path null name state stopped template download template options dist ubuntu release trusty arch thinpool null vg name lxc zfs root null module name lxc container lxc container init pid interfaces ips name state stopped task task path root create containers yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpjst zx to root ansible tmp ansible tmp command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp command rm rf root ansible tmp ansible tmp dev null sleep changed changed true cmd delta end invocation module args raw params lxc info n uses shell false chdir null creates null executable null removes null warn true module name command rc start stderr stdout name nstate stopped stdout lines warnings task task path root create containers yml establish local connection for user 
root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp lxc container exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp lxc container rm rf root ansible tmp ansible tmp dev null sleep changed changed true invocation module args archive false archive compression gzip archive path null backing store overlayfs clone name clone snapshot true config null container command null container config null container log false container log level info directory null fs size fs type lv name lxc path null name state started template ubuntu template options null thinpool null vg name lxc zfs root null module name lxc container lxc container cloned true init pid interfaces ips name state running task task path root create containers yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp command rm rf root ansible tmp ansible tmp dev null sleep changed changed true cmd delta end invocation module args raw params lxc info n uses shell false chdir null creates null executable null removes null warn true module name command rc start stderr stdout name nstate running npid ncpu use seconds nblkio use kib nmemory use mib nkmem use bytes nlink n tx bytes bytes n rx bytes bytes n total bytes bytes stdout lines warnings task task path root create containers yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpbwleqy to root ansible tmp ansible tmp command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root 
ansible tmp ansible tmp command rm rf root ansible tmp ansible tmp dev null sleep changed changed true cmd delta end invocation module args raw params lxc info n uses shell false chdir null creates null executable null removes null warn true module name command rc start stderr stdout name nstate stopped stdout lines warnings task task path root create containers yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpaswetj to root ansible tmp ansible tmp command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp command rm rf root ansible tmp ansible tmp dev null sleep changed changed true cmd delta end invocation module args raw params lxc create name template download dist ubuntu release trusty arch uses shell false chdir null creates null executable null removes null warn true module name command rc start stderr stdout using image from local cache nunpacking the rootfs n n nyou just created an ubuntu container release trusty arch variant default n nto enable sshd run apt get install openssh server n nfor security reason container images ship without user accounts nand without a root password n nuse lxc attach or chroot directly into the rootfs to set a root password nor create user accounts stdout lines warnings task task path root create containers yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpvhhdsn to root ansible tmp ansible tmp command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp command rm rf root ansible tmp ansible tmp dev null sleep changed changed true cmd delta end invocation module args raw params lxc info n uses shell false chdir null creates null executable null removes null 
warn true module name command rc start stderr stdout name nstate stopped stdout lines warnings task task path root create containers yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp command rm rf root ansible tmp ansible tmp dev null sleep changed changed true cmd delta end invocation module args raw params lxc clone snapshot backingstore overlayfs orig new uses shell false chdir null creates null executable null removes null warn true module name command rc start stderr lxc clone is deprecated in favor of lxc copy stdout created container as snapshot of stdout lines warnings task task path root create containers yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp command rm rf root ansible tmp ansible tmp dev null sleep changed changed true cmd delta end invocation module args raw params lxc start name uses shell false chdir null creates null executable null removes null warn true module name command rc start stderr stdout stdout lines warnings task task path root create containers yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpujitzb to root ansible tmp ansible tmp command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp command rm rf root ansible tmp ansible tmp dev null sleep changed changed true cmd delta end invocation module args raw 
params lxc info n uses shell false chdir null creates null executable null removes null warn true module name command rc start stderr stdout name nstate stopped stdout lines warnings task task path root create containers yml establish local connection for user root exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpodyzbz to root ansible tmp ansible tmp command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python root ansible tmp ansible tmp command rm rf root ansible tmp ansible tmp dev null sleep changed changed true cmd delta end invocation module args raw params lxc info n uses shell false chdir null creates null executable null removes null warn true module name command rc start stderr stdout name nstate running npid ncpu use seconds nblkio use kib nmemory use mib nkmem use bytes nlink n tx bytes bytes n rx bytes bytes n total bytes bytes stdout lines warnings play recap localhost ok changed unreachable failed
1
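The record above reports that `lxc_container` with `clone_snapshot` starts the origin container instead of the clone. A minimal workaround sketch, assuming the module misbehaves exactly as reported: after the clone task, explicitly pin both containers to the states the playbook actually wants. The container names `c1`/`c2` and the task names here are hypothetical illustrations, not taken from the record; the parameters (`clone_snapshot`, `clone_name`, `backing_store`, `state`) are the ones the reproduction playbook itself uses.

```yaml
# Workaround sketch (assumption: lxc_container erroneously starts the origin
# container "c1" when cloning, as the record above reports). After the clone,
# force each container back to its intended state.
- name: Clone c1 into c2 from a snapshot (may wrongly start c1 per the bug)
  lxc_container:
    name: c1
    clone_snapshot: yes
    clone_name: c2
    state: started
    backing_store: overlayfs

- name: Work around the bug - force the origin container back to stopped
  lxc_container:
    name: c1
    state: stopped

- name: Ensure the clone itself is started
  lxc_container:
    name: c2
    state: started
```

This only masks the symptom; the underlying module behavior described in the record is unchanged.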
961
4,704,706,600
IssuesEvent
2016-10-13 12:28:03
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Bad documentation example in gce module
affects_2.2 cloud docs_report gce waiting_on_maintainer
This is the example I'm following, but I end up with different results. You can see below most of the playbook works out fine except for the part when Ansible comes in to do it's magic. For some reason the host cannot be found. ┌─┤[james@xps13-nocentre-net:~/workspace/github/ansible][3.18.8-201.fc21.x86_64 10:51:10]├────── ── ─ └[feature/elk] $ ansible-playbook playbooks/plays/gce/elk.yml -i inventory/hosts_production ``` PLAY [Google Cloud Procurement] *********************************************** TASK: [gce] ******************************************************************* ok: [localhost -> 127.0.0.1] TASK: [set_fact public_ip={{gce.instance_data[0].public_ip}}] ***************** ok: [localhost] TASK: [set_fact private_ip={{gce.instance_data[0].private_ip}}] *************** ok: [localhost] TASK: [set_fact ansible_ssh_host={{public_ip}}] ******************************* ok: [localhost] TASK: [debug msg="{{gce}}"] *************************************************** ok: [localhost] => { "msg": "{u'instance_data': [{u'status': u'RUNNING', u'name': u'elk-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'tags': [], u'image': None, u'disks': [u'elk-stage-uscentral1a-kernelfire-com', u'elk-data-stage-uscentral1a-kernelfire-com'], u'public_ip': u'130.211.122.65', u'private_ip': u'10.240.178.66', u'machine_type': u'n1-standard-4', u'metadata': {}, u'network': u'default'}], u'name': u'elk-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'changed': False, u'state': u'present', 'invocation': {'module_name': u'gce', 'module_args': ''}}" } TASK: [Wait for SSH to come up] *********************************************** ok: [localhost -> 127.0.0.1] => (item={u'status': u'RUNNING', u'network': u'default', u'zone': u'us-central1-a', u'tags': [], u'image': None, u'disks': [u'elk-stage-uscentral1a-kernelfire-com', u'elk-data-stage-uscentral1a-kernelfire-com'], u'public_ip': u'130.211.122.65', u'private_ip': u'10.240.178.66', u'machine_type': 
u'n1-standard-4', u'metadata': {}, u'name': u'elk-stage-uscentral1a-kernelfire-com'}) TASK: [gce_pd] **************************************************************** ok: [localhost -> 127.0.0.1] TASK: [set_fact gce_pd_name={{gce_pd.name}}] ********************************** ok: [localhost] TASK: [debug msg="{{gce_pd}}"] ************************************************ ok: [localhost] => { "msg": "{u'size_gb': 100, u'name': u'elk-data-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'changed': False, u'state': u'present', u'attached_to_instance': u'elk-stage-uscentral1a-kernelfire-com', 'invocation': {'module_name': u'gce_pd', 'module_args': ''}, u'disk_type': u'pd-ssd', u'attached_mode': u'READ_WRITE'}" } TASK: [debug msg="{{gce_pd_name}}"] ******************************************* ok: [localhost] => { "msg": "elk-data-stage-uscentral1a-kernelfire-com" } PLAY [Ansible Provisioning] *************************************************** skipping: no hosts matched PLAY RECAP ******************************************************************** localhost : ok=10 changed=0 unreachable=0 failed=0 ``` ============= PLAYBOOK ============== ``` - name: Google Cloud Procurement hosts: localhost connection: local gather_facts: false tasks: - local_action: module: gce image: centos-6 machine_type: n1-standard-4 name: elk-stage-uscentral1a-kernelfire-com persistent_boot_disk: true project_id: cf-stage zone: us-central1-a register: gce - set_fact: public_ip={{gce.instance_data[0].public_ip}} - set_fact: private_ip={{gce.instance_data[0].private_ip}} - set_fact: ansible_ssh_host={{public_ip}} - debug: msg="{{gce}}" - name: Wait for SSH to come up local_action: wait_for host={{item.public_ip}} port=22 delay=10 timeout=60 state=started with_items: "{{gce.instance_data}}" - local_action: module: gce_pd disk_type: pd-ssd instance_name: "{{gce.instance_data[0].name}}" mode: READ_WRITE name: elk-data-stage-uscentral1a-kernelfire-com project_id: cf-stage size_gb: 100 zone: 
us-central1-a register: gce_pd - set_fact: gce_pd_name={{gce_pd.name}} - debug: msg="{{gce_pd}}" - debug: msg="{{gce_pd_name}}" - name: Ansible Provisioning hosts: launched sudo: true roles: - { role: elasticsearch, tags: [ 'extended', 'elasticsearch' ] } - { role: redis, tags: [ 'extended', 'redis' ] } - { role: logstash, tags: [ 'extended', 'logstash' ] } - { role: kibana, tags: [ 'extended', 'kibana' ] } ``` Any ideas?
True
Bad documentation example in gce module - This is the example I'm following, but I end up with different results. You can see below most of the playbook works out fine except for the part when Ansible comes in to do it's magic. For some reason the host cannot be found. ┌─┤[james@xps13-nocentre-net:~/workspace/github/ansible][3.18.8-201.fc21.x86_64 10:51:10]├────── ── ─ └[feature/elk] $ ansible-playbook playbooks/plays/gce/elk.yml -i inventory/hosts_production ``` PLAY [Google Cloud Procurement] *********************************************** TASK: [gce] ******************************************************************* ok: [localhost -> 127.0.0.1] TASK: [set_fact public_ip={{gce.instance_data[0].public_ip}}] ***************** ok: [localhost] TASK: [set_fact private_ip={{gce.instance_data[0].private_ip}}] *************** ok: [localhost] TASK: [set_fact ansible_ssh_host={{public_ip}}] ******************************* ok: [localhost] TASK: [debug msg="{{gce}}"] *************************************************** ok: [localhost] => { "msg": "{u'instance_data': [{u'status': u'RUNNING', u'name': u'elk-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'tags': [], u'image': None, u'disks': [u'elk-stage-uscentral1a-kernelfire-com', u'elk-data-stage-uscentral1a-kernelfire-com'], u'public_ip': u'130.211.122.65', u'private_ip': u'10.240.178.66', u'machine_type': u'n1-standard-4', u'metadata': {}, u'network': u'default'}], u'name': u'elk-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'changed': False, u'state': u'present', 'invocation': {'module_name': u'gce', 'module_args': ''}}" } TASK: [Wait for SSH to come up] *********************************************** ok: [localhost -> 127.0.0.1] => (item={u'status': u'RUNNING', u'network': u'default', u'zone': u'us-central1-a', u'tags': [], u'image': None, u'disks': [u'elk-stage-uscentral1a-kernelfire-com', u'elk-data-stage-uscentral1a-kernelfire-com'], u'public_ip': u'130.211.122.65', u'private_ip': 
u'10.240.178.66', u'machine_type': u'n1-standard-4', u'metadata': {}, u'name': u'elk-stage-uscentral1a-kernelfire-com'}) TASK: [gce_pd] **************************************************************** ok: [localhost -> 127.0.0.1] TASK: [set_fact gce_pd_name={{gce_pd.name}}] ********************************** ok: [localhost] TASK: [debug msg="{{gce_pd}}"] ************************************************ ok: [localhost] => { "msg": "{u'size_gb': 100, u'name': u'elk-data-stage-uscentral1a-kernelfire-com', u'zone': u'us-central1-a', u'changed': False, u'state': u'present', u'attached_to_instance': u'elk-stage-uscentral1a-kernelfire-com', 'invocation': {'module_name': u'gce_pd', 'module_args': ''}, u'disk_type': u'pd-ssd', u'attached_mode': u'READ_WRITE'}" } TASK: [debug msg="{{gce_pd_name}}"] ******************************************* ok: [localhost] => { "msg": "elk-data-stage-uscentral1a-kernelfire-com" } PLAY [Ansible Provisioning] *************************************************** skipping: no hosts matched PLAY RECAP ******************************************************************** localhost : ok=10 changed=0 unreachable=0 failed=0 ``` ============= PLAYBOOK ============== ``` - name: Google Cloud Procurement hosts: localhost connection: local gather_facts: false tasks: - local_action: module: gce image: centos-6 machine_type: n1-standard-4 name: elk-stage-uscentral1a-kernelfire-com persistent_boot_disk: true project_id: cf-stage zone: us-central1-a register: gce - set_fact: public_ip={{gce.instance_data[0].public_ip}} - set_fact: private_ip={{gce.instance_data[0].private_ip}} - set_fact: ansible_ssh_host={{public_ip}} - debug: msg="{{gce}}" - name: Wait for SSH to come up local_action: wait_for host={{item.public_ip}} port=22 delay=10 timeout=60 state=started with_items: "{{gce.instance_data}}" - local_action: module: gce_pd disk_type: pd-ssd instance_name: "{{gce.instance_data[0].name}}" mode: READ_WRITE name: elk-data-stage-uscentral1a-kernelfire-com 
project_id: cf-stage size_gb: 100 zone: us-central1-a register: gce_pd - set_fact: gce_pd_name={{gce_pd.name}} - debug: msg="{{gce_pd}}" - debug: msg="{{gce_pd_name}}" - name: Ansible Provisioning hosts: launched sudo: true roles: - { role: elasticsearch, tags: [ 'extended', 'elasticsearch' ] } - { role: redis, tags: [ 'extended', 'redis' ] } - { role: logstash, tags: [ 'extended', 'logstash' ] } - { role: kibana, tags: [ 'extended', 'kibana' ] } ``` Any ideas?
main
bad documentation example in gce module this is the example i m following but i end up with different results you can see below most of the playbook works out fine except for the part when ansible comes in to do it s magic for some reason the host cannot be found ┌─┤ ├────── ── ─ └ ansible playbook playbooks plays gce elk yml i inventory hosts production play task ok task public ip ok task private ip ok task ok task ok msg u instance data u image none u disks u public ip u u private ip u u machine type u standard u metadata u network u default u name u elk stage kernelfire com u zone u us a u changed false u state u present invocation module name u gce module args task ok item u status u running u network u default u zone u us a u tags u image none u disks u public ip u u private ip u u machine type u standard u metadata u name u elk stage kernelfire com task ok task ok task ok msg u size gb u name u elk data stage kernelfire com u zone u us a u changed false u state u present u attached to instance u elk stage kernelfire com invocation module name u gce pd module args u disk type u pd ssd u attached mode u read write task ok msg elk data stage kernelfire com play skipping no hosts matched play recap localhost ok changed unreachable failed playbook name google cloud procurement hosts localhost connection local gather facts false tasks local action module gce image centos machine type standard name elk stage kernelfire com persistent boot disk true project id cf stage zone us a register gce set fact public ip gce instance data public ip set fact private ip gce instance data private ip set fact ansible ssh host public ip debug msg gce name wait for ssh to come up local action wait for host item public ip port delay timeout state started with items gce instance data local action module gce pd disk type pd ssd instance name gce instance data name mode read write name elk data stage kernelfire com project id cf stage size gb zone us a register gce pd set fact gce pd 
name gce pd name debug msg gce pd debug msg gce pd name name ansible provisioning hosts launched sudo true roles role elasticsearch tags role redis tags role logstash tags role kibana tags any ideas
1
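In the record above, the second play (`hosts: launched`) is skipped with "no hosts matched" because nothing ever populates an inventory group named `launched`. A hedged sketch of the usual remedy: register the freshly created instance in an in-memory group with the core `add_host` module. It reuses the `gce.instance_data` variable registered by the playbook in the record; the task name is an illustration, not the documented fix for this issue.

```yaml
# Sketch: populate the in-memory "launched" group so the follow-up play
# "Ansible Provisioning" (hosts: launched) has hosts to run against.
# Reuses gce.instance_data registered by the gce task in the record above.
- name: Add new GCE instances to the launched group
  local_action:
    module: add_host
    hostname: "{{ item.public_ip }}"
    groupname: launched
  with_items: "{{ gce.instance_data }}"
```

Placed after the "Wait for SSH to come up" task, this makes the `launched` group available for the rest of the run.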
2,246
7,918,501,823
IssuesEvent
2018-07-04 13:31:13
react-navigation/react-navigation
https://api.github.com/repos/react-navigation/react-navigation
closed
Incorrect goBack() with BottomTabNavigator
needs response from maintainer
### Current Behavior Given the navigation structure: - Stack Navigator - Tab Navigator - Main Tab (initial route) - Second Tab - Stand Alone Navigator (Stack) - Stand Alone Route Given the events: 1. navigate to second tab 2. navigate to stand alone route (not a direct sibling inside Tab Navigator) 3. navigate back `goBack(null)` This is navigating back to the Tab Navigator (as expected), but using the Main Tab (initial route), not the last selected tab (Second Tab). ### Expected Behavior - Second tab should be selected This was the case with 2.5.4 (commit c7fff52408bc5cd4d88a636d1053564289e150c1 broke it). ### How to reproduce https://snack.expo.io/BywyJGdMX ```js import React from 'react'; import { View, Text, Button } from 'react-native'; import { createStackNavigator, createBottomTabNavigator, } from 'react-navigation'; const B = ({ children }) => ( <Text style={{ fontWeight: 'bold' }}>{children}</Text> ); const MainTab = ({ navigation }) => ( <View> <Text> <B>MainTab</B>: go to SecondTab </Text> <Button title="navigate" onPress={() => navigation.navigate({ routeName: 'SecondTab' })} /> </View> ); MainTab.navigationOptions = { title: 'MainTab' }; const SecondTab = ({ navigation }) => ( <View> <Text> <B>SecondTab</B>: go to StandAlone </Text> <Button title="navigate" onPress={() => navigation.navigate({ routeName: 'StandAlone' })} /> </View> ); SecondTab.navigationOptions = { title: 'SecondTab' }; const StandAlone = ({ navigation }) => ( <View> <Text> <B>StandAlone</B>: go back. Should go to <B>SecondTab</B> (last selected/active) </Text> <Button title="Back!" 
onPress={() => navigation.goBack(null)} /> </View> ); StandAlone.navigationOptions = { title: 'StandAlone' }; const StandAloneNavigator = createStackNavigator({ StandAlone, }); const TabNavigator = createBottomTabNavigator({ MainTab, SecondTab, }); TabNavigator.navigationOptions = { title: 'TabNavigator' }; export default createStackNavigator( { TabNavigator, StandAloneNavigator, }, { initialRouteName: 'TabNavigator', } ); ``` ### Your Environment | software | version | ---------------- | ------- | react-navigation | 2.5.5 | react-native | 0.55.4 | node | 9.2.0 | yarn | 1.7.0
True
Incorrect goBack() with BottomTabNavigator - ### Current Behavior Given the navigation structure: - Stack Navigator - Tab Navigator - Main Tab (initial route) - Second Tab - Stand Alone Navigator (Stack) - Stand Alone Route Given the events: 1. navigate to second tab 2. navigate to stand alone route (not a direct sibling inside Tab Navigator) 3. navigate back `goBack(null)` This is navigating back to the Tab Navigator (as expected), but using the Main Tab (initial route), not the last selected tab (Second Tab). ### Expected Behavior - Second tab should be selected This was the case with 2.5.4 (commit c7fff52408bc5cd4d88a636d1053564289e150c1 broke it). ### How to reproduce https://snack.expo.io/BywyJGdMX ```js import React from 'react'; import { View, Text, Button } from 'react-native'; import { createStackNavigator, createBottomTabNavigator, } from 'react-navigation'; const B = ({ children }) => ( <Text style={{ fontWeight: 'bold' }}>{children}</Text> ); const MainTab = ({ navigation }) => ( <View> <Text> <B>MainTab</B>: go to SecondTab </Text> <Button title="navigate" onPress={() => navigation.navigate({ routeName: 'SecondTab' })} /> </View> ); MainTab.navigationOptions = { title: 'MainTab' }; const SecondTab = ({ navigation }) => ( <View> <Text> <B>SecondTab</B>: go to StandAlone </Text> <Button title="navigate" onPress={() => navigation.navigate({ routeName: 'StandAlone' })} /> </View> ); SecondTab.navigationOptions = { title: 'SecondTab' }; const StandAlone = ({ navigation }) => ( <View> <Text> <B>StandAlone</B>: go back. Should go to <B>SecondTab</B> (last selected/active) </Text> <Button title="Back!" 
onPress={() => navigation.goBack(null)} /> </View> ); StandAlone.navigationOptions = { title: 'StandAlone' }; const StandAloneNavigator = createStackNavigator({ StandAlone, }); const TabNavigator = createBottomTabNavigator({ MainTab, SecondTab, }); TabNavigator.navigationOptions = { title: 'TabNavigator' }; export default createStackNavigator( { TabNavigator, StandAloneNavigator, }, { initialRouteName: 'TabNavigator', } ); ``` ### Your Environment | software | version | ---------------- | ------- | react-navigation | 2.5.5 | react-native | 0.55.4 | node | 9.2.0 | yarn | 1.7.0
main
incorrect goback with bottomtabnavigator current behavior given the navigation structure stack navigator tab navigator main tab initial route second tab stand alone navigator stack stand alone route given the events navigate to second tab navigate to stand alone route not a direct sibling inside tab navigator navigate back goback null this is navigating back to the tab navigator as expected but using the main tab initial route not the last selected tab second tab expected behavior second tab should be selected this was the case with commit broke it how to reproduce js import react from react import view text button from react native import createstacknavigator createbottomtabnavigator from react navigation const b children children const maintab navigation maintab go to secondtab button title navigate onpress navigation navigate routename secondtab maintab navigationoptions title maintab const secondtab navigation secondtab go to standalone button title navigate onpress navigation navigate routename standalone secondtab navigationoptions title secondtab const standalone navigation standalone go back should go to secondtab last selected active navigation goback null standalone navigationoptions title standalone const standalonenavigator createstacknavigator standalone const tabnavigator createbottomtabnavigator maintab secondtab tabnavigator navigationoptions title tabnavigator export default createstacknavigator tabnavigator standalonenavigator initialroutename tabnavigator your environment software version react navigation react native node yarn
1
673,387
22,960,486,174
IssuesEvent
2022-07-19 14:59:45
Elice-SW-2-Team14/Animal-Hospital
https://api.github.com/repos/Elice-SW-2-Team14/Animal-Hospital
closed
[FE] Fetch hospital info from the API on the detail page
🔨 Feature ❗️high-priority 🖥 Frontend
## 🔨 Feature description Fetch hospital info from the API on the detail page ## 📑 Completion criteria Done when it finishes without errors! ## 💭 Related backlog [[FE] Detail page]-[Main component]-[Hospital info] ## 💭 Estimated work time 3h
1.0
[FE] Fetch hospital info from the API on the detail page - ## 🔨 Feature description Fetch hospital info from the API on the detail page ## 📑 Completion criteria Done when it finishes without errors! ## 💭 Related backlog [[FE] Detail page]-[Main component]-[Hospital info] ## 💭 Estimated work time 3h
non_main
fetch hospital info from the api on the detail page 🔨 feature description fetch hospital info from the api on the detail page 📑 completion criteria done when it finishes without errors 💭 related backlog detail page 💭 estimated work time
0
27,009
11,423,910,595
IssuesEvent
2020-02-03 16:43:29
whitesource-yossi/vexflow
https://api.github.com/repos/whitesource-yossi/vexflow
opened
CVE-2015-8858 (High) detected in uglify-js-2.3.6.tgz
security vulnerability
## CVE-2015-8858 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-2.3.6.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.3.6.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.3.6.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/vexflow/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/vexflow/node_modules/handlebars/node_modules/uglify-js/package.json</p> <p> Dependency Hierarchy: - qunit-0.9.3.tgz (Root Library) - istanbul-0.2.5.tgz - handlebars-1.3.0.tgz - :x: **uglify-js-2.3.6.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/whitesource-yossi/vexflow/commit/0cae6b1d26651f52a75d41d6249c70186bc90881">0cae6b1d26651f52a75d41d6249c70186bc90881</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The uglify-js package before 2.6.0 for Node.js allows attackers to cause a denial of service (CPU consumption) via crafted input in a parse call, aka a "regular expression denial of service (ReDoS)." 
<p>Publish Date: 2017-01-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8858>CVE-2015-8858</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p> <p>Release Date: 2018-12-15</p> <p>Fix Resolution: v2.6.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"uglify-js","packageVersion":"2.3.6","isTransitiveDependency":true,"dependencyTree":"qunit:0.9.3;istanbul:0.2.5;handlebars:1.3.0;uglify-js:2.3.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v2.6.0"}],"vulnerabilityIdentifier":"CVE-2015-8858","vulnerabilityDetails":"The uglify-js package before 2.6.0 for Node.js allows attackers to cause a denial of service (CPU consumption) via crafted input in a parse call, aka a \"regular expression denial of service 
(ReDoS).\"","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8858","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2015-8858 (High) detected in uglify-js-2.3.6.tgz - ## CVE-2015-8858 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-2.3.6.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.3.6.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.3.6.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/vexflow/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/vexflow/node_modules/handlebars/node_modules/uglify-js/package.json</p> <p> Dependency Hierarchy: - qunit-0.9.3.tgz (Root Library) - istanbul-0.2.5.tgz - handlebars-1.3.0.tgz - :x: **uglify-js-2.3.6.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/whitesource-yossi/vexflow/commit/0cae6b1d26651f52a75d41d6249c70186bc90881">0cae6b1d26651f52a75d41d6249c70186bc90881</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The uglify-js package before 2.6.0 for Node.js allows attackers to cause a denial of service (CPU consumption) via crafted input in a parse call, aka a "regular expression denial of service (ReDoS)." 
<p>Publish Date: 2017-01-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8858>CVE-2015-8858</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8858</a></p> <p>Release Date: 2018-12-15</p> <p>Fix Resolution: v2.6.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"uglify-js","packageVersion":"2.3.6","isTransitiveDependency":true,"dependencyTree":"qunit:0.9.3;istanbul:0.2.5;handlebars:1.3.0;uglify-js:2.3.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v2.6.0"}],"vulnerabilityIdentifier":"CVE-2015-8858","vulnerabilityDetails":"The uglify-js package before 2.6.0 for Node.js allows attackers to cause a denial of service (CPU consumption) via crafted input in a parse call, aka a \"regular expression denial of service 
(ReDoS).\"","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8858","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_main
cve high detected in uglify js tgz cve high severity vulnerability vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file tmp ws scm vexflow package json path to vulnerable library tmp ws scm vexflow node modules handlebars node modules uglify js package json dependency hierarchy qunit tgz root library istanbul tgz handlebars tgz x uglify js tgz vulnerable library found in head commit a href vulnerability details the uglify js package before for node js allows attackers to cause a denial of service cpu consumption via crafted input in a parse call aka a regular expression denial of service redos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails the uglify js package before for node js allows attackers to cause a denial of service cpu consumption via crafted input in a parse call aka a regular expression denial of service redos vulnerabilityurl
0
4,493
23,393,964,169
IssuesEvent
2022-08-11 20:50:51
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
opened
Customize page favicons
type: enhancement work: frontend status: blocked restricted: new maintainers
## Current behavior - All pages have the same favicon ## Desired behavior - Our [Navigation Specs](https://github.com/centerofci/mathesar-wiki/blob/master/design/specs/navigation.md#navigation-via-page-url) require that we use distinct favicons for pages that represent different types of entities. ## Implementation 1. Review specs to see which entities require custom favicons. 1. Look in `src/icons.ts` to find the correct FontAwesome icons for those entities. 1. Generate custom favicons based on the FontAwesome icons. I like to follow [this guide](https://evilmartians.com/chronicles/how-to-favicon-in-2021-six-files-that-fit-most-needs) for favicons, but we could probably get away with just using the SVG. Which browser's don't support SVG favicons these days? It might be good to look. 1. Use `<svelte:head>` to modify the HTML `<head>` element from within components like `TablePage.svelte` ## Status - Blocked by #1231
True
Customize page favicons - ## Current behavior - All pages have the same favicon ## Desired behavior - Our [Navigation Specs](https://github.com/centerofci/mathesar-wiki/blob/master/design/specs/navigation.md#navigation-via-page-url) require that we use distinct favicons for pages that represent different types of entities. ## Implementation 1. Review specs to see which entities require custom favicons. 1. Look in `src/icons.ts` to find the correct FontAwesome icons for those entities. 1. Generate custom favicons based on the FontAwesome icons. I like to follow [this guide](https://evilmartians.com/chronicles/how-to-favicon-in-2021-six-files-that-fit-most-needs) for favicons, but we could probably get away with just using the SVG. Which browser's don't support SVG favicons these days? It might be good to look. 1. Use `<svelte:head>` to modify the HTML `<head>` element from within components like `TablePage.svelte` ## Status - Blocked by #1231
main
customize page favicons current behavior all pages have the same favicon desired behavior our require that we use distinct favicons for pages that represent different types of entities implementation review specs to see which entities require custom favicons look in src icons ts to find the correct fontawesome icons for those entities generate custom favicons based on the fontawesome icons i like to follow for favicons but we could probably get away with just using the svg which browser s don t support svg favicons these days it might be good to look use to modify the html element from within components like tablepage svelte status blocked by
1
15,591
8,969,357,769
IssuesEvent
2019-01-29 10:35:39
rust-lang/rust
https://api.github.com/repos/rust-lang/rust
opened
Tracking Issue for making incremental compilation the default for Release Builds
A-incr-comp C-tracking-issue I-compiletime T-cargo T-compiler T-core WG-compiler-performance
Since incremental compilation supports being used in conjunction with ThinLTO the runtime performance of incrementally built artifacts is (presumably) roughly on par with non-incrementally built code. At the same time, building things incrementally often is significantly faster (([1.4-5x](https://github.com/rust-lang/rust/pull/56678#issuecomment-446606215) according to perf.rlo). As a consequence it might be a good idea to make Cargo default to incremental compilation for release builds. Possible caveats that need to be resolved: - [ ] The initial build is slightly slower with incremental compilation, usually around 10%. We need to decide if this is a worthwhile tradeoff. For `debug` and `check` builds everybody seems to be fine with this already. - [ ] Some crates, like `style-servo`, are always slower to compile with incr. comp., even if there is just a small change. In the case of `style-servo` that is 62 seconds versus 64-69 seconds on perf.rlo. It is unlikely that this would improve before we make incr. comp. the default. We need to decide if this is a justifiable price to pay for improvements in other projects. - [ ] Even if incremental compilation becomes the default, one can still always opt out of it via the `CARGO_INCREMENTAL` flag or a local Cargo config. However, this might not be common knowledge, the same as it isn't common knowledge that one can improve runtime performance by forcing the compiler to use just one codegen unit. - [ ] It still needs to be verified that runtime performance of compiled artifacts does not suffer too much from switching to incremental compilation (see below). ## Data on runtime performance of incrementally compiled release artifacts Apart from anectodal evidence that runtime performance is "roughly the same" there have been two attempts to measure this in a more reliable way: 1. 
PR #56678 did an experiment where we compiled the compiler itself incrementally and then tested how the compiler's runtime performance was affected by this. The results are twofold: 1. In general performance drops by **1-2%** ([compare results](https://perf.rust-lang.org/compare.html?start=3a3121337122637fa11f0e5d42aec67551e8c125&end=26f96e5eea2d6d088fd20ebc14dc90bdf123e4a1) for `clean` builds) 2. For two of the small test cases (`helloworld`, `unify-linearly`) performance drops by 30%. It is known that these test cases are very sensitive to LLVM making the right inlining decisions, which we already saw when switching from single-CGU to non-incremental ThinLTO. This is indicative that microbenchmarks may see performance drops unless the author of the benchmark takes care of marking bottleneck functions with `#[inline]`. 2. For a limited period of time we made incremental compilation the default in Cargo (https://github.com/rust-lang/cargo/pull/6564) in order to see how this affected measurements on [lolbench.rs](https://lolbench.rs). It is not yet clear if the experiment succeeded and how much useful data it collected since we had to cut it short because of a regression (#57947). The initial data looks promising: only a handful of the ~600 benchmarks showed performance losses (see https://lolbench.rs/#nightly-2019-01-27). But we need further investigation on how reliable the results are. We might also want to re-run the experiment since the regression can easily be avoided. One more experiment we should do is compiling Firefox because it is a large Rust codebase with an excellent benchmarking infrastructure (cc @nnethercote). cc @rust-lang/core @rust-lang/cargo @rust-lang/compiler
True
Tracking Issue for making incremental compilation the default for Release Builds - Since incremental compilation supports being used in conjunction with ThinLTO the runtime performance of incrementally built artifacts is (presumably) roughly on par with non-incrementally built code. At the same time, building things incrementally often is significantly faster (([1.4-5x](https://github.com/rust-lang/rust/pull/56678#issuecomment-446606215) according to perf.rlo). As a consequence it might be a good idea to make Cargo default to incremental compilation for release builds. Possible caveats that need to be resolved: - [ ] The initial build is slightly slower with incremental compilation, usually around 10%. We need to decide if this is a worthwhile tradeoff. For `debug` and `check` builds everybody seems to be fine with this already. - [ ] Some crates, like `style-servo`, are always slower to compile with incr. comp., even if there is just a small change. In the case of `style-servo` that is 62 seconds versus 64-69 seconds on perf.rlo. It is unlikely that this would improve before we make incr. comp. the default. We need to decide if this is a justifiable price to pay for improvements in other projects. - [ ] Even if incremental compilation becomes the default, one can still always opt out of it via the `CARGO_INCREMENTAL` flag or a local Cargo config. However, this might not be common knowledge, the same as it isn't common knowledge that one can improve runtime performance by forcing the compiler to use just one codegen unit. - [ ] It still needs to be verified that runtime performance of compiled artifacts does not suffer too much from switching to incremental compilation (see below). ## Data on runtime performance of incrementally compiled release artifacts Apart from anectodal evidence that runtime performance is "roughly the same" there have been two attempts to measure this in a more reliable way: 1. 
PR #56678 did an experiment where we compiled the compiler itself incrementally and then tested how the compiler's runtime performance was affected by this. The results are twofold: 1. In general performance drops by **1-2%** ([compare results](https://perf.rust-lang.org/compare.html?start=3a3121337122637fa11f0e5d42aec67551e8c125&end=26f96e5eea2d6d088fd20ebc14dc90bdf123e4a1) for `clean` builds) 2. For two of the small test cases (`helloworld`, `unify-linearly`) performance drops by 30%. It is known that these test cases are very sensitive to LLVM making the right inlining decisions, which we already saw when switching from single-CGU to non-incremental ThinLTO. This is indicative that microbenchmarks may see performance drops unless the author of the benchmark takes care of marking bottleneck functions with `#[inline]`. 2. For a limited period of time we made incremental compilation the default in Cargo (https://github.com/rust-lang/cargo/pull/6564) in order to see how this affected measurements on [lolbench.rs](https://lolbench.rs). It is not yet clear if the experiment succeeded and how much useful data it collected since we had to cut it short because of a regression (#57947). The initial data looks promising: only a handful of the ~600 benchmarks showed performance losses (see https://lolbench.rs/#nightly-2019-01-27). But we need further investigation on how reliable the results are. We might also want to re-run the experiment since the regression can easily be avoided. One more experiment we should do is compiling Firefox because it is a large Rust codebase with an excellent benchmarking infrastructure (cc @nnethercote). cc @rust-lang/core @rust-lang/cargo @rust-lang/compiler
non_main
tracking issue for making incremental compilation the default for release builds since incremental compilation supports being used in conjunction with thinlto the runtime performance of incrementally built artifacts is presumably roughly on par with non incrementally built code at the same time building things incrementally often is significantly faster according to perf rlo as a consequence it might be a good idea to make cargo default to incremental compilation for release builds possible caveats that need to be resolved the initial build is slightly slower with incremental compilation usually around we need to decide if this is a worthwhile tradeoff for debug and check builds everybody seems to be fine with this already some crates like style servo are always slower to compile with incr comp even if there is just a small change in the case of style servo that is seconds versus seconds on perf rlo it is unlikely that this would improve before we make incr comp the default we need to decide if this is a justifiable price to pay for improvements in other projects even if incremental compilation becomes the default one can still always opt out of it via the cargo incremental flag or a local cargo config however this might not be common knowledge the same as it isn t common knowledge that one can improve runtime performance by forcing the compiler to use just one codegen unit it still needs to be verified that runtime performance of compiled artifacts does not suffer too much from switching to incremental compilation see below data on runtime performance of incrementally compiled release artifacts apart from anectodal evidence that runtime performance is roughly the same there have been two attempts to measure this in a more reliable way pr did an experiment where we compiled the compiler itself incrementally and then tested how the compiler s runtime performance was affected by this the results are twofold in general performance drops by for clean builds for two of 
the small test cases helloworld unify linearly performance drops by it is known that these test cases are very sensitive to llvm making the right inlining decisions which we already saw when switching from single cgu to non incremental thinlto this is indicative that microbenchmarks may see performance drops unless the author of the benchmark takes care of marking bottleneck functions with for a limited period of time we made incremental compilation the default in cargo in order to see how this affected measurements on it is not yet clear if the experiment succeeded and how much useful data it collected since we had to cut it short because of a regression the initial data looks promising only a handful of the benchmarks showed performance losses see but we need further investigation on how reliable the results are we might also want to re run the experiment since the regression can easily be avoided one more experiment we should do is compiling firefox because it is a large rust codebase with an excellent benchmarking infrastructure cc nnethercote cc rust lang core rust lang cargo rust lang compiler
0
4,618
23,923,101,856
IssuesEvent
2022-09-09 19:03:13
NixOS/nixpkgs
https://api.github.com/repos/NixOS/nixpkgs
opened
RFC: Provide latest versions of some Haskell executables at the top level
0.kind: question 6.topic: haskell 9.needs: maintainer
Since we are quite often stuck on Stackage LTS (for better or worse), we tend to get stuck with certain major versions of packages that serve as both executables and libraries. We need to follow Stackage LTS to avoid breaking builds in `haskellPackages`, but normal users may want the latest and greatest versions of certain executables – here the API stability criteria of Stackage don't really apply. Anecdotically, users ask about latest versions of * `pandoc` * `xmonad` * `hledger` * more? The question is now: Should we provide the latest version of these packages at their top level attributes? This should be possible via the versioned attributes `hackage2nix` generates, but may require extra overrides in many cases to account for breaking changes and discrepancies with the main package set. My personal opinion is that this is a good idea, but we need to figure out, how this would work in terms of organization. I don't want to burden us (@NixOS/haskell) with even more, maybe nontrivial work, since the current amount of work is already more than enough (especially if other responsibilities get in the way). Ideally we'd have maintainers step up for the respective packages, but they would need to be * knowledgeable about Haskell in order to deal with build failures * knowledgeable about the (still underdocumented) Haskell infrastructure in nixpkgs to fix said failures * somewhat available, as build failures may crop up as we update the Haskell packages set which is on a schedule of every 1-2 weeks roughly.
True
RFC: Provide latest versions of some Haskell executables at the top level - Since we are quite often stuck on Stackage LTS (for better or worse), we tend to get stuck with certain major versions of packages that serve as both executables and libraries. We need to follow Stackage LTS to avoid breaking builds in `haskellPackages`, but normal users may want the latest and greatest versions of certain executables – here the API stability criteria of Stackage don't really apply. Anecdotically, users ask about latest versions of * `pandoc` * `xmonad` * `hledger` * more? The question is now: Should we provide the latest version of these packages at their top level attributes? This should be possible via the versioned attributes `hackage2nix` generates, but may require extra overrides in many cases to account for breaking changes and discrepancies with the main package set. My personal opinion is that this is a good idea, but we need to figure out, how this would work in terms of organization. I don't want to burden us (@NixOS/haskell) with even more, maybe nontrivial work, since the current amount of work is already more than enough (especially if other responsibilities get in the way). Ideally we'd have maintainers step up for the respective packages, but they would need to be * knowledgeable about Haskell in order to deal with build failures * knowledgeable about the (still underdocumented) Haskell infrastructure in nixpkgs to fix said failures * somewhat available, as build failures may crop up as we update the Haskell packages set which is on a schedule of every 1-2 weeks roughly.
main
rfc provide latest versions of some haskell executables at the top level since we are quite often stuck on stackage lts for better or worse we tend to get stuck with certain major versions of packages that serve as both executables and libraries we need to follow stackage lts to avoid breaking builds in haskellpackages but normal users may want the latest and greatest versions of certain executables – here the api stability criteria of stackage don t really apply anecdotically users ask about latest versions of pandoc xmonad hledger more the question is now should we provide the latest version of these packages at their top level attributes this should be possible via the versioned attributes generates but may require extra overrides in many cases to account for breaking changes and discrepancies with the main package set my personal opinion is that this is a good idea but we need to figure out how this would work in terms of organization i don t want to burden us nixos haskell with even more maybe nontrivial work since the current amount of work is already more than enough especially if other responsibilities get in the way ideally we d have maintainers step up for the respective packages but they would need to be knowledgeable about haskell in order to deal with build failures knowledgeable about the still underdocumented haskell infrastructure in nixpkgs to fix said failures somewhat available as build failures may crop up as we update the haskell packages set which is on a schedule of every weeks roughly
1
970
4,709,183,715
IssuesEvent
2016-10-14 03:58:02
duckduckgo/zeroclickinfo-spice
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
closed
METAR Information: API Endpoint not working
Maintainer Input Requested
Requesting http://avwx.rest/api/metar.php?station=klax&format=JSON&options=info,translate fails to load. See what the problem is or contact avwx.rest. Thank you! ------ IA Page: http://duck.co/ia/view/metar [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @santigl
True
METAR Information: API Endpoint not working - Requesting http://avwx.rest/api/metar.php?station=klax&format=JSON&options=info,translate fails to load. See what the problem is or contact avwx.rest. Thank you! ------ IA Page: http://duck.co/ia/view/metar [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @santigl
main
metar information api endpoint not working requesting fails to load see what the problem is or contact avwx rest thank you ia page santigl
1
206
2,850,001,618
IssuesEvent
2015-05-31 06:19:07
krico/jas
https://api.github.com/repos/krico/jas
closed
Java tests fail on OSX with java.lang.OutOfMemoryError: unable to create new native thread
backend maintainer
If you look on `ps -Mef` threads seem to go up to ~3000 and then start failing. Tests fail with ``` com.jasify.schedule.appengine.util.KeyUtilTest Time elapsed: 0.003 sec <<< ERROR! java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949) at java.util.concurrent.ThreadPoolExecutor.ensurePrestart(ThreadPoolExecutor.java:1590) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:333) at java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:594) at com.google.appengine.api.datastore.dev.LocalDatastoreService.startInternal(LocalDatastoreService.java:563) at com.google.appengine.api.datastore.dev.LocalDatastoreService.access$300(LocalDatastoreService.java:140) at com.google.appengine.api.datastore.dev.LocalDatastoreService$2.run(LocalDatastoreService.java:554) at java.security.AccessController.doPrivileged(Native Method) at com.google.appengine.api.datastore.dev.LocalDatastoreService.start(LocalDatastoreService.java:551) at com.google.appengine.tools.development.ApiProxyLocalImpl.startServices(ApiProxyLocalImpl.java:604) at com.google.appengine.tools.development.ApiProxyLocalImpl.access$700(ApiProxyLocalImpl.java:46) at com.google.appengine.tools.development.ApiProxyLocalImpl$2.run(ApiProxyLocalImpl.java:584) at com.google.appengine.tools.development.ApiProxyLocalImpl$2.run(ApiProxyLocalImpl.java:581) at java.security.AccessController.doPrivileged(Native Method) at com.google.appengine.tools.development.ApiProxyLocalImpl.getService(ApiProxyLocalImpl.java:580) at com.google.appengine.tools.development.testing.LocalServiceTestHelper.getLocalService(LocalServiceTestHelper.java:589) at 
com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig.getLocalDatastoreService(LocalDatastoreServiceTestConfig.java:294) at com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig.tearDown(LocalDatastoreServiceTestConfig.java:288) at com.google.appengine.tools.development.testing.LocalServiceTestHelper.tearDownService(LocalServiceTestHelper.java:548) at com.google.appengine.tools.development.testing.LocalServiceTestHelper.tearDown(LocalServiceTestHelper.java:520) at com.jasify.schedule.appengine.TestHelper.cleanupDatastore(TestHelper.java:170) at com.jasify.schedule.appengine.TestHelper.cleanupDatastore(TestHelper.java:166) at com.jasify.schedule.appengine.util.KeyUtilTest.cleanupDatastore(KeyUtilTest.java:55) ```
True
Java tests fail on OSX with java.lang.OutOfMemoryError: unable to create new native thread - If you look on `ps -Mef` threads seem to go up to ~3000 and then start failing. Tests fail with ``` com.jasify.schedule.appengine.util.KeyUtilTest Time elapsed: 0.003 sec <<< ERROR! java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:714) at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949) at java.util.concurrent.ThreadPoolExecutor.ensurePrestart(ThreadPoolExecutor.java:1590) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:333) at java.util.concurrent.ScheduledThreadPoolExecutor.scheduleWithFixedDelay(ScheduledThreadPoolExecutor.java:594) at com.google.appengine.api.datastore.dev.LocalDatastoreService.startInternal(LocalDatastoreService.java:563) at com.google.appengine.api.datastore.dev.LocalDatastoreService.access$300(LocalDatastoreService.java:140) at com.google.appengine.api.datastore.dev.LocalDatastoreService$2.run(LocalDatastoreService.java:554) at java.security.AccessController.doPrivileged(Native Method) at com.google.appengine.api.datastore.dev.LocalDatastoreService.start(LocalDatastoreService.java:551) at com.google.appengine.tools.development.ApiProxyLocalImpl.startServices(ApiProxyLocalImpl.java:604) at com.google.appengine.tools.development.ApiProxyLocalImpl.access$700(ApiProxyLocalImpl.java:46) at com.google.appengine.tools.development.ApiProxyLocalImpl$2.run(ApiProxyLocalImpl.java:584) at com.google.appengine.tools.development.ApiProxyLocalImpl$2.run(ApiProxyLocalImpl.java:581) at java.security.AccessController.doPrivileged(Native Method) at com.google.appengine.tools.development.ApiProxyLocalImpl.getService(ApiProxyLocalImpl.java:580) at com.google.appengine.tools.development.testing.LocalServiceTestHelper.getLocalService(LocalServiceTestHelper.java:589) at 
com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig.getLocalDatastoreService(LocalDatastoreServiceTestConfig.java:294) at com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig.tearDown(LocalDatastoreServiceTestConfig.java:288) at com.google.appengine.tools.development.testing.LocalServiceTestHelper.tearDownService(LocalServiceTestHelper.java:548) at com.google.appengine.tools.development.testing.LocalServiceTestHelper.tearDown(LocalServiceTestHelper.java:520) at com.jasify.schedule.appengine.TestHelper.cleanupDatastore(TestHelper.java:170) at com.jasify.schedule.appengine.TestHelper.cleanupDatastore(TestHelper.java:166) at com.jasify.schedule.appengine.util.KeyUtilTest.cleanupDatastore(KeyUtilTest.java:55) ```
main
java tests fail on osx with java lang outofmemoryerror unable to create new native thread if you look on ps mef threads seem to go up to and then start failing tests fail with com jasify schedule appengine util keyutiltest time elapsed sec error java lang outofmemoryerror unable to create new native thread at java lang thread native method at java lang thread start thread java at java util concurrent threadpoolexecutor addworker threadpoolexecutor java at java util concurrent threadpoolexecutor ensureprestart threadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor delayedexecute scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor schedulewithfixeddelay scheduledthreadpoolexecutor java at com google appengine api datastore dev localdatastoreservice startinternal localdatastoreservice java at com google appengine api datastore dev localdatastoreservice access localdatastoreservice java at com google appengine api datastore dev localdatastoreservice run localdatastoreservice java at java security accesscontroller doprivileged native method at com google appengine api datastore dev localdatastoreservice start localdatastoreservice java at com google appengine tools development apiproxylocalimpl startservices apiproxylocalimpl java at com google appengine tools development apiproxylocalimpl access apiproxylocalimpl java at com google appengine tools development apiproxylocalimpl run apiproxylocalimpl java at com google appengine tools development apiproxylocalimpl run apiproxylocalimpl java at java security accesscontroller doprivileged native method at com google appengine tools development apiproxylocalimpl getservice apiproxylocalimpl java at com google appengine tools development testing localservicetesthelper getlocalservice localservicetesthelper java at com google appengine tools development testing localdatastoreservicetestconfig getlocaldatastoreservice localdatastoreservicetestconfig java at com google 
appengine tools development testing localdatastoreservicetestconfig teardown localdatastoreservicetestconfig java at com google appengine tools development testing localservicetesthelper teardownservice localservicetesthelper java at com google appengine tools development testing localservicetesthelper teardown localservicetesthelper java at com jasify schedule appengine testhelper cleanupdatastore testhelper java at com jasify schedule appengine testhelper cleanupdatastore testhelper java at com jasify schedule appengine util keyutiltest cleanupdatastore keyutiltest java
1
1,194
5,113,484,771
IssuesEvent
2017-01-06 15:29:31
aroberge/reeborg
https://api.github.com/repos/aroberge/reeborg
closed
Replace wall drawing by tiles
easier to maintain enhancement
Currently, walls are drawn as small rectangles on the canvas. It might make more sense to replace them by tiles (small square images) so as to simplify the drawing methods. When editing a world, instead of changing the wall colour, we could simply change the background colour around the world or something similar. This would allow more customization in the look of the walls as well (perhaps replace them by hedges, etc.)
True
Replace wall drawing by tiles - Currently, walls are drawn as small rectangles on the canvas. It might make more sense to replace them by tiles (small square images) so as to simplify the drawing methods. When editing a world, instead of changing the wall colour, we could simply change the background colour around the world or something similar. This would allow more customization in the look of the walls as well (perhaps replace them by hedges, etc.)
main
replace wall drawing by tiles currently walls are drawn as small rectangles on the canvas it might make more sense to replace them by tiles small square images so as to simplify the drawing methods when editing a world instead of changing the wall colour we could simply change the background colour around the world or something similar this would allow more customization in the look of the walls as well perhaps replace them by hedges etc
1
628,367
19,984,260,224
IssuesEvent
2022-01-30 12:06:31
slsdetectorgroup/slsDetectorPackage
https://api.github.com/repos/slsdetectorgroup/slsDetectorPackage
opened
MOENCH - unable to run blocking acquire() with an asyncio
action - Bug priority - Unclassified status - Pending
<!-- Preview changes before submitting --> <!-- Please fill out everything with an *, as this report will be discarded otherwise --> <!-- This is a comment, the syntax is a bit different from c++ or bash --> ##### *Distribution: <!-- RHEL7, RHEL6, Fedora, etc --> RHEL7 ##### *Detector type: <!-- If applicable, Eiger, Jungfrau, Mythen3, Gotthard2, Gotthard, Moench, ChipTestBoard --> MOENCH ##### *Software Package Version: <!-- developer, 4.2.0, 4.1.1, etc --> 0x210225 ##### Priority: <!-- Super Low, Low, Medium, High, Super High --> medium ##### *Describe the bug <!-- A clear and concise description of what the bug is --> We would be happy to control our moench detector from a tango server. There is a limitation on a single command's execution time (< 3 sec). So, I would like to use an async call for the `moench.acquire()` command. PyTango recommends using the `asyncio` Python library. So do I: ```python async def _async_acquire(self, loop): self.set_state(DevState.RUNNING) m = Moench() await loop.run_in_executor(None, m.acquire) # works fine for time.sleep() or input() self.set_state(DevState.ON) @command async def async_acquire(self): loop = asyncio.get_event_loop() future = loop.create_task(self._async_acquire(loop)) ``` The command doesn't run in an async way. Moreover, this example works fine: ```python def block_acquire(self): self.set_state(DevState.RUNNING) m = Moench() exptime = m.exptime frames = m.frames m.startDetector() m.startReceiver() time.sleep(exptime * frames) while m.status != runStatus.IDLE: time.sleep(0.1) m.stopReceiver() self.set_state(DevState.ON) async def _async_acquire(self, loop): await loop.run_in_executor(None, self.block_acquire) # MUST BE WORKING @command async def async_acquire(self): loop = asyncio.get_event_loop() future = loop.create_task(self._async_acquire(loop)) ``` ##### Expected behavior <!-- A clear and concise description of what you expected to happen. 
--> I suspect that the problem arises in the detector API function call, because the given example above works for any other blocking function call (I have tested it with `time.sleep()` and `input()` as well). Probably some extra flag is required for the `pybind` interface. ##### To Reproduce <!-- Steps to reproduce the behavior: --> <!-- 1. Go to '...' --> <!-- 2. Click on '....' --> <!-- 3. Scroll down to '....' --> <!-- 4. See error --> ##### Screenshots <!-- If applicable, add screenshots to help explain your problem. --> ##### Additional context <!-- Add any other context about the problem here. -->
1.0
MOENCH - unable to run blocking acquire() with an asyncio - <!-- Preview changes before submitting --> <!-- Please fill out everything with an *, as this report will be discarded otherwise --> <!-- This is a comment, the syntax is a bit different from c++ or bash --> ##### *Distribution: <!-- RHEL7, RHEL6, Fedora, etc --> RHEL7 ##### *Detector type: <!-- If applicable, Eiger, Jungfrau, Mythen3, Gotthard2, Gotthard, Moench, ChipTestBoard --> MOENCH ##### *Software Package Version: <!-- developer, 4.2.0, 4.1.1, etc --> 0x210225 ##### Priority: <!-- Super Low, Low, Medium, High, Super High --> medium ##### *Describe the bug <!-- A clear and concise description of what the bug is --> We would be happy to control our moench detector from a tango server. There is a limitation on a single command's execution time (< 3 sec). So, I would like to use an async call for the `moench.acquire()` command. PyTango recommends using the `asyncio` Python library. So do I: ```python async def _async_acquire(self, loop): self.set_state(DevState.RUNNING) m = Moench() await loop.run_in_executor(None, m.acquire) # works fine for time.sleep() or input() self.set_state(DevState.ON) @command async def async_acquire(self): loop = asyncio.get_event_loop() future = loop.create_task(self._async_acquire(loop)) ``` The command doesn't run in an async way. Moreover, this example works fine: ```python def block_acquire(self): self.set_state(DevState.RUNNING) m = Moench() exptime = m.exptime frames = m.frames m.startDetector() m.startReceiver() time.sleep(exptime * frames) while m.status != runStatus.IDLE: time.sleep(0.1) m.stopReceiver() self.set_state(DevState.ON) async def _async_acquire(self, loop): await loop.run_in_executor(None, self.block_acquire) # MUST BE WORKING @command async def async_acquire(self): loop = asyncio.get_event_loop() future = loop.create_task(self._async_acquire(loop)) ``` ##### Expected behavior <!-- A clear and concise description of what you expected to happen. 
--> I suspect that the problem arises in the detector API function call, because the given example above works for any other blocking function call (I have tested it with `time.sleep()` and `input()` as well). Probably some extra flag is required for the `pybind` interface. ##### To Reproduce <!-- Steps to reproduce the behavior: --> <!-- 1. Go to '...' --> <!-- 2. Click on '....' --> <!-- 3. Scroll down to '....' --> <!-- 4. See error --> ##### Screenshots <!-- If applicable, add screenshots to help explain your problem. --> ##### Additional context <!-- Add any other context about the problem here. -->
non_main
moench unable to run blocking acquire with an asyncio distribution detector type moench software package version priority medium describe the bug we would be happy to control our moench detector from a tango server there is a limitiation for a single command execution time sec so i would like to use a async call for an moench acquire command pytango recommend to use an asyncio python library so do i python async def async acquire self loop self set state devstate running m moench await loop run in executor none m acquire works fine for time sleep or input self set state devstate on command async def async acquire self loop asyncio get event loop future loop create task self async acquire loop the command doesn t run in async way moreover this example works fine python def block acquire self self set state devstate running m moench exptime m exptime frames m frames m startdetector m startreceiver time sleep exptime frames while m status runstatus idle time sleep m stopreceiver self set state devstate on async def async acquire self loop await loop run in executor none self block acquire must be working command async def async acquire self loop asyncio get event loop future loop create task self async acquire loop expected behavior i suspect that the problem arises in the detector api function call because a given example above works for any other blocking function call i have tested it with a time sleep and input as well probably some extra flag is required for pybind interface to reproduce screenshots additional context
0
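The MOENCH record above hinges on `loop.run_in_executor`, which only keeps the event loop responsive if the blocking call releases the GIL while it waits — the reporter's own suspicion about the `pybind` layer. A minimal, self-contained sketch of the pattern, using `time.sleep` as a stand-in for the detector's `acquire()` (names and timings are illustrative, not part of the slsDetector API):

```python
import asyncio
import time

def blocking_call(duration: float) -> str:
    # Stand-in for a blocking SDK call such as m.acquire().
    # A C-extension call only cooperates with this pattern if it
    # releases the GIL while it blocks; time.sleep() does.
    time.sleep(duration)
    return "done"

async def main() -> list:
    loop = asyncio.get_running_loop()
    # Run the blocking call in the default thread-pool executor so the
    # event loop stays free while it waits.
    acquire = loop.run_in_executor(None, blocking_call, 0.2)
    ticks = []
    while not acquire.done():
        ticks.append("tick")        # the loop keeps doing other work
        await asyncio.sleep(0.05)
    result = await acquire
    return [result, len(ticks) > 0]

print(asyncio.run(main()))  # → ['done', True]
```

If the same harness were run against the real `m.acquire` and `ticks` stayed empty, the executor thread would be holding the GIL for the whole call — consistent with the blocked behaviour described in the report.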
5,528
27,636,162,802
IssuesEvent
2023-03-10 14:36:12
viperproject/VerifiedSCION
https://api.github.com/repos/viperproject/VerifiedSCION
closed
Run Gobra and tests on the CI
maintainability
We should add two kinds of steps in the CI: - [x] Verify all packages that are marked for verification - [x] For this, we should fix the version of gobra and gobra-action used in the CI, and always strive to use the latest - [x] Ensure that the verified code is still compiling - [ ] Run all test-files (ending in suffix `_test.go`) to guarantee that everything is performing as expected
True
Run Gobra and tests on the CI - We should add two kinds of steps in the CI: - [x] Verify all packages that are marked for verification - [x] For this, we should fix the version of gobra and gobra-action used in the CI, and always strive to use the latest - [x] Ensure that the verified code is still compiling - [ ] Run all test-files (ending in suffix `_test.go`) to guarantee that everything is performing as expected
main
run gobra and tests on the ci we should add two kinds of steps in the ci verify all packages that are marked for verification for this we should fix the version of gobra and gobra action used in the ci and always strive to use the latest ensure that the verified code is still compiling run all test files ending in suffix test go to guarantee that everything is performing as expected
1
4,213
20,778,253,093
IssuesEvent
2022-03-16 12:36:05
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
Mounting docker container image stuck
area/docker maintainer/need-followup
Hi, I'm using `aws-sam-cli`, version 1.40.1, to build my AWS Lambda through a GitHub action. While running this command: `sam build --template ${SAM_TEMPLATE} --use-container --base-dir ./lambda --debug`, which usually took about 1 minute, it now takes almost 3 hours and doesn't show any errors. It has already happened 3 times, starting from 3.3.22. Here is the process it's getting stuck on: 2022-03-03 15:29:42,720 | Mounting /home/runner/work/..... as /tmp/samcli/source:ro,delegated inside runtime container Unfortunately, I don't have any more traceback to show, because there isn't any. Environment: Ubuntu 20.04 1.40.1 us-east1 Thank you :)
True
Mounting docker container image stuck - Hi, I'm using `aws-sam-cli`, version 1.40.1, to build my AWS Lambda through a GitHub action. While running this command: `sam build --template ${SAM_TEMPLATE} --use-container --base-dir ./lambda --debug`, which usually took about 1 minute, it now takes almost 3 hours and doesn't show any errors. It has already happened 3 times, starting from 3.3.22. Here is the process it's getting stuck on: 2022-03-03 15:29:42,720 | Mounting /home/runner/work/..... as /tmp/samcli/source:ro,delegated inside runtime container Unfortunately, I don't have any more traceback to show, because there isn't any. Environment: Ubuntu 20.04 1.40.1 us-east1 Thank you :)
main
mounting docker container image stuck hi i m using aws sam cli version to build my aws lambda through a github action while running this command sam build template sam template use container base dir lambda debug which usually took me about minute more or less it now takes almost hours and it doesn t show any errors and stuff it happened already times starting from here is the process it s getting stuck on mounting home runner work as tmp samcli source ro delegated inside runtime container unfortunately i don t have any more traceback and stuff to show because there isn t environment ubuntu us thank you
1
2,661
5,011,499,411
IssuesEvent
2016-12-13 08:06:06
Zuehlke/HouseOfCards
https://api.github.com/repos/Zuehlke/HouseOfCards
closed
Return error messages to players upon invalid moves (Set, Fold)
game-server requirement
Error messaging is currently not supported.
1.0
Return error messages to players upon invalid moves (Set, Fold) - Error messaging is currently not supported.
non_main
return error messages to players upon invalid moves set fold error messaging is currently not supported
0
351,861
25,041,806,856
IssuesEvent
2022-11-04 21:46:02
Python-Community-News/Topics
https://api.github.com/repos/Python-Community-News/Topics
closed
Update fields for episode_prep.yaml
documentation good first issue hacktoberfest-accepted
Update the yaml file at `.github/episode_prep.yaml` with the following Changes: ### Change the `title` to: `<EPISODE_TITLE>` ```title: "[Publish] Python Community <EPISODE NUMBER>" # Change to <EPISODE TITLE FROM YOUTUBE>``` ### Change the `<shownotes id>` to `<podcast>` ``` - type: input id: shownotes_id #Change this to Shownotes attributes: label: Shownotes ID ``` ### Change the `<YouTube ID>` to `<YouTube>` ``` - type: input id: youtube_id # Change to Youtube attributes: ```
1.0
Update fields for episode_prep.yaml - Update the yaml file at `.github/episode_prep.yaml` with the following Changes: ### Change the `title` to: `<EPISODE_TITLE>` ```title: "[Publish] Python Community <EPISODE NUMBER>" # Change to <EPISODE TITLE FROM YOUTUBE>``` ### Change the `<shownotes id>` to `<podcast>` ``` - type: input id: shownotes_id #Change this to Shownotes attributes: label: Shownotes ID ``` ### Change the `<YouTube ID>` to `<YouTube>` ``` - type: input id: youtube_id # Change to Youtube attributes: ```
non_main
update fields for episode prep yaml update the yaml file at github episode prep yaml with the following changes change the title to title python community change to change the to type input id shownotes id change this to shownotes attributes label shownotes id change the to type input id youtube id change to youtube attributes
0
4,190
20,378,876,369
IssuesEvent
2022-02-21 18:43:39
HPCL/code-analysis
https://api.github.com/repos/HPCL/code-analysis
closed
CWE-1048 Invokable Control Element with Large Number of Outward Calls (Excessive Coupling or Fan-out)
CLAIMED ISO/IEC 5055:2021 Operation OutwardCalls WEAKNESS CATEGORY: MAINTAINABILITY
**Usage Name** Excessive references **Reference** [https://cwe.mitre.org/data/definitions/1048](https://cwe.mitre.org/data/definitions/1048) **Roles** - the *Operation* - the *OutwardCalls* **Detection Patterns** - 8.2.127 ASCQM Limit Number of Outward Calls
True
CWE-1048 Invokable Control Element with Large Number of Outward Calls (Excessive Coupling or Fan-out) - **Usage Name** Excessive references **Reference** [https://cwe.mitre.org/data/definitions/1048](https://cwe.mitre.org/data/definitions/1048) **Roles** - the *Operation* - the *OutwardCalls* **Detection Patterns** - 8.2.127 ASCQM Limit Number of Outward Calls
main
cwe invokable control element with large number of outward calls excessive coupling or fan out usage name excessive references reference roles the operation the outwardcalls detection patterns ascqm limit number of outward calls
1
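The fan-out weakness catalogued in CWE-1048 can be approximated with a short static scan. A hedged sketch using Python's `ast` module — the threshold and the name-collection rules here are illustrative assumptions, not the ASCQM 8.2.127 detection pattern itself:

```python
import ast

# Illustrative threshold; the standard leaves the exact limit to the
# measurement tool's configuration.
MAX_OUTWARD_CALLS = 5

def outward_calls(func: ast.FunctionDef) -> set:
    """Collect the distinct callee names invoked inside one function."""
    names = set()
    for node in ast.walk(func):
        if isinstance(node, ast.Call):
            callee = node.func
            if isinstance(callee, ast.Name):        # plain call: f(...)
                names.add(callee.id)
            elif isinstance(callee, ast.Attribute): # method call: obj.f(...)
                names.add(callee.attr)
    return names

def high_fanout(source: str) -> dict:
    """Map function name -> distinct call count, for functions over the limit."""
    tree = ast.parse(source)
    report = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = outward_calls(node)
            if len(calls) > MAX_OUTWARD_CALLS:
                report[node.name] = len(calls)
    return report

sample = """
def busy():
    a(); b(); c(); d(); e(); f(); g()

def quiet():
    a(); b()
"""
print(high_fanout(sample))  # → {'busy': 7}
```

A production-grade checker would additionally resolve imports and aliases so that two names pointing at the same callee are not double-counted.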
3,188
3,367,799,027
IssuesEvent
2015-11-22 13:56:28
godotengine/godot
https://api.github.com/repos/godotengine/godot
closed
Editor font setting is not applied when open gd file from FileSystem tab
bug topic:editor usability
Reproduce steps: 1. Set a custom font at Editor Settings > Global > Font 2. Browse a gd file in the FileSystem panel 3. Open a gd file which is not already opened.
True
Editor font setting is not applied when open gd file from FileSystem tab - Reproduce steps: 1. Set a custom font at Editor Settings > Global > Font 2. Browse a gd file in the FileSystem panel 3. Open a gd file which is not already opened.
non_main
editor font setting is not applied when open gd file from filesystem tab reproduce steps set custom font at editor settings global font browse gd file in filesystem panel open gd file which is not already opened
0
4,316
21,718,474,610
IssuesEvent
2022-05-10 20:32:22
aws/aws-sam-cli-app-templates
https://api.github.com/repos/aws/aws-sam-cli-app-templates
closed
Add SAM template for .NET 6 C# **Quick Start: Web Backend**
maintainer/need-response
AWS announced [the .NET 6 runtime for AWS Lambda](https://aws.amazon.com/blogs/compute/introducing-the-net-6-runtime-for-aws-lambda/) Let's add SAM template for .NET 6 C# **Quick Start: Web Backend**
True
Add SAM template for .NET 6 C# **Quick Start: Web Backend** - AWS announced [the .NET 6 runtime for AWS Lambda](https://aws.amazon.com/blogs/compute/introducing-the-net-6-runtime-for-aws-lambda/) Let's add SAM template for .NET 6 C# **Quick Start: Web Backend**
main
add sam template for net c quick start web backend aws announced let s add sam template for net c quick start web backend
1
4,161
19,977,078,479
IssuesEvent
2022-01-29 08:54:51
bromite/bromite
https://api.github.com/repos/bromite/bromite
closed
Add an option to delete cookies and site data when closing the browser
enhancement enhancement-without-maintainer needs-triage
<!-- Welcome! Thanks for taking time to submit a feature request. Have you searched the issue tracker? https://github.com/bromite/bromite/issues Have you read the F.A.Q.s? https://github.com/bromite/bromite/blob/master/FAQ.md Have you read the README? https://github.com/bromite/bromite/blob/master/README.md Have you read the Wiki? https://github.com/bromite/bromite/wiki If instead of a feature request you want to ask a question then please use the GitHub Discussions: https://github.com/bromite/bromite/discussions --> <!-- Do not submit feature requests for extensions support or adding a search engine. --> ### Is your feature request related to privacy? Yes <!-- Features that are not related to privacy are not considered. --> ### Is there a patch available for this feature somewhere? I don't know. <!-- If yes then provide URL and license information. --> ### Describe the solution you would like Every time the browser is closed, all cookies and site data should be cleared, except those which are defined as exceptions. <!-- A clear and concise description of what you want to happen. Do not ask "I would like feature X which is available in browser Y"; such issues are closed immediately. --> ### Describe alternatives you have considered I don't have any. <!-- A clear and concise description of any alternative solutions or features you have considered. -->
True
Add an option to delete cookies and site data when closing the browser - <!-- Welcome! Thanks for taking time to submit a feature request. Have you searched the issue tracker? https://github.com/bromite/bromite/issues Have you read the F.A.Q.s? https://github.com/bromite/bromite/blob/master/FAQ.md Have you read the README? https://github.com/bromite/bromite/blob/master/README.md Have you read the Wiki? https://github.com/bromite/bromite/wiki If instead of a feature request you want to ask a question then please use the GitHub Discussions: https://github.com/bromite/bromite/discussions --> <!-- Do not submit feature requests for extensions support or adding a search engine. --> ### Is your feature request related to privacy? Yes <!-- Features that are not related to privacy are not considered. --> ### Is there a patch available for this feature somewhere? I don't know. <!-- If yes then provide URL and license information. --> ### Describe the solution you would like Every time the browser is closed, all cookies and site data should be cleared, except those which are defined as exceptions. <!-- A clear and concise description of what you want to happen. Do not ask "I would like feature X which is available in browser Y"; such issues are closed immediately. --> ### Describe alternatives you have considered I don't have any. <!-- A clear and concise description of any alternative solutions or features you have considered. -->
main
add an option to delete cookies and site data when closing the browser welcome thanks for taking time to submit a feature request have you searched the issue tracker have you read the f a q s have you read the readme have you read the wiki if instead of a feature request you want to ask a question then please use the github discussions is your feature request related to privacy yes is there a patch available for this feature somewhere i don t know describe the solution you would like everytime the browser is closed all cookies and site data should be cleared except those which are defined as exceptions a clear and concise description of what you want to happen do not ask i would like feature x which is available in browser y such issues are closed immediately describe alternatives you have considered i don t have any
1
122,645
4,838,518,961
IssuesEvent
2016-11-09 03:55:07
Cadasta/cadasta-platform
https://api.github.com/repos/Cadasta/cadasta-platform
opened
Implement a mock ES cluster for testing purposes
high priority records
This GH issue is to support the implementation of the search feature (#825) and was discussed during the search call on Nov. 7/8. To be able to develop and improve the search function, a mock ES cluster is needed for testing purposes. This is basically a simple HTTP server that represents the ES API. For maximum flexibility, the tests should be able to dynamically program the HTTP responses that are returned by mock server in response to any HTTP request, so there is no need to actually write code that emulates the ES API. (Unless emulating the API in some fashion would simplify the test coding, of course.)
1.0
Implement a mock ES cluster for testing purposes - This GH issue is to support the implementation of the search feature (#825) and was discussed during the search call on Nov. 7/8. To be able to develop and improve the search function, a mock ES cluster is needed for testing purposes. This is basically a simple HTTP server that represents the ES API. For maximum flexibility, the tests should be able to dynamically program the HTTP responses that are returned by mock server in response to any HTTP request, so there is no need to actually write code that emulates the ES API. (Unless emulating the API in some fashion would simplify the test coding, of course.)
non_main
implement a mock es cluster for testing purposes this gh issue is to support the implementation of the search feature and was discussed during the search call on nov to be able to develop and improve the search function a mock es cluster is needed for testing purposes this is basically a simple http server that represents the es api for maximum flexibility the tests should be able to dynamically program the http responses that are returned by mock server in response to any http request so there is no need to actually write code that emulates the es api unless emulating the api in some fashion would simplify the test coding of course
0
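The mock ES cluster described above — "a simple HTTP server" whose tests "dynamically program the HTTP responses" — can be sketched with the standard library alone. This is a minimal illustration, not the Cadasta implementation; the `RESPONSES` table and the endpoint names are assumptions:

```python
import http.server
import json
import threading
import urllib.request

# Tests program this table at runtime: path -> (status, JSON body).
RESPONSES = {}

class MockESHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = RESPONSES.get(self.path, (404, {"error": "not programmed"}))
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep test output quiet
        pass

def start_mock(port: int = 0) -> tuple:
    """Start the mock on a free port; returns (server, bound_port)."""
    server = http.server.HTTPServer(("127.0.0.1", port), MockESHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

server, port = start_mock()
# A test programs exactly the response it wants the "cluster" to return ...
RESPONSES["/_search"] = (200, {"hits": {"total": 1}})
# ... and the code under test sees it as a live ES endpoint.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/_search") as resp:
    print(json.loads(resp.read()))  # → {'hits': {'total': 1}}
server.shutdown()
```

Because the response table is mutated at runtime, each test can stage exactly the cluster behaviour it needs without writing code that emulates the full ES API — matching the flexibility goal stated in the issue.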
412,946
27,881,094,434
IssuesEvent
2023-03-21 19:28:11
bounswe/bounswe2023group6
https://api.github.com/repos/bounswe/bounswe2023group6
closed
Edit questions about requirements and general aspects of project to ask to TA.
type: documentation priority: high status: inprogress area: meeting
### Problem We decided on some questions about the requirements and general aspects of the project. Before asking the TA, we will edit the questions. ### Solution We will meet in Discord and edit the questions together. ### Documentation https://docs.google.com/document/d/1iSIr5YIwcGAGQxxcSxFYsV0xnc8BUettnlRPUugf0_s/edit ### Additional notes _No response_ ### Reviewers _No response_ ### Deadline 21.03.2023 - Tuesday - 23.59
1.0
Edit questions about requirements and general aspects of project to ask to TA. - ### Problem We decided on some questions about the requirements and general aspects of the project. Before asking the TA, we will edit the questions. ### Solution We will meet in Discord and edit the questions together. ### Documentation https://docs.google.com/document/d/1iSIr5YIwcGAGQxxcSxFYsV0xnc8BUettnlRPUugf0_s/edit ### Additional notes _No response_ ### Reviewers _No response_ ### Deadline 21.03.2023 - Tuesday - 23.59
non_main
edit questions about requirements and general aspects of project to ask to ta problem we decided some questions about requirements and general aspects of project before asking to ta we will edit questions url solution we will meet in discord and edit questions together documentation additional notes no response reviewers no response deadline tuesday
0
697
4,264,189,365
IssuesEvent
2016-07-12 05:40:43
duckduckgo/zeroclickinfo-spice
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
closed
BaconIpsum: Does not give an answer on live server
Maintainer Input Requested
The IA is not triggering on the live server but gives an answer on the beta server: https://beta.duckduckgo.com/?q=baconipsum+4&ia=baconipsum It's in the docs as an example of the text template group. ------ IA Page: http://duck.co/ia/view/bacon_ipsum [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @puskin94
True
BaconIpsum: Does not give an answer on live server - The IA is not triggering on the live server but gives an answer on the beta server: https://beta.duckduckgo.com/?q=baconipsum+4&ia=baconipsum It's in the docs as an example of the text template group. ------ IA Page: http://duck.co/ia/view/bacon_ipsum [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @puskin94
main
baconipsum does not give an answer on live server the ia is not triggering on the live server but gives an answer on the beta server its in the docs as an example of text template group ia page
1
34,446
12,288,115,917
IssuesEvent
2020-05-09 15:19:53
Zymergen/hubot-docker
https://api.github.com/repos/Zymergen/hubot-docker
opened
CVE-2018-16492 (High) detected in extend-3.0.1.tgz
security vulnerability
## CVE-2018-16492 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>extend-3.0.1.tgz</b></p></summary> <p>Port of jQuery.extend for node.js and the browser</p> <p>Library home page: <a href="https://registry.npmjs.org/extend/-/extend-3.0.1.tgz">https://registry.npmjs.org/extend/-/extend-3.0.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/hubot-docker/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/hubot-docker/node_modules/grpc/node_modules/extend/package.json</p> <p> Dependency Hierarchy: - hubot-assistant-2.0.4.tgz (Root Library) - google-assistant-0.2.2.tgz - grpc-1.8.0.tgz - node-pre-gyp-0.6.39.tgz - request-2.81.0.tgz - :x: **extend-3.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Zymergen/hubot-docker/commit/07953cb6bb385a84410fb77bc2c3d2ff16dee495">07953cb6bb385a84410fb77bc2c3d2ff16dee495</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A prototype pollution vulnerability was found in module extend <2.0.2, ~<3.0.2 that allows an attacker to inject arbitrary properties onto Object.prototype. 
<p>Publish Date: 2019-02-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16492>CVE-2018-16492</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://hackerone.com/reports/381185">https://hackerone.com/reports/381185</a></p> <p>Release Date: 2019-02-01</p> <p>Fix Resolution: extend - v3.0.2,v2.0.2</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"extend","packageVersion":"3.0.1","isTransitiveDependency":true,"dependencyTree":"hubot-assistant:2.0.4;google-assistant:0.2.2;grpc:1.8.0;node-pre-gyp:0.6.39;request:2.81.0;extend:3.0.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"extend - v3.0.2,v2.0.2"}],"vulnerabilityIdentifier":"CVE-2018-16492","vulnerabilityDetails":"A prototype pollution vulnerability was found in module extend \u003c2.0.2, ~\u003c3.0.2 that allows an attacker to inject arbitrary properties onto 
Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16492","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2018-16492 (High) detected in extend-3.0.1.tgz - ## CVE-2018-16492 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>extend-3.0.1.tgz</b></p></summary> <p>Port of jQuery.extend for node.js and the browser</p> <p>Library home page: <a href="https://registry.npmjs.org/extend/-/extend-3.0.1.tgz">https://registry.npmjs.org/extend/-/extend-3.0.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/hubot-docker/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/hubot-docker/node_modules/grpc/node_modules/extend/package.json</p> <p> Dependency Hierarchy: - hubot-assistant-2.0.4.tgz (Root Library) - google-assistant-0.2.2.tgz - grpc-1.8.0.tgz - node-pre-gyp-0.6.39.tgz - request-2.81.0.tgz - :x: **extend-3.0.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Zymergen/hubot-docker/commit/07953cb6bb385a84410fb77bc2c3d2ff16dee495">07953cb6bb385a84410fb77bc2c3d2ff16dee495</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A prototype pollution vulnerability was found in module extend <2.0.2, ~<3.0.2 that allows an attacker to inject arbitrary properties onto Object.prototype. 
<p>Publish Date: 2019-02-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16492>CVE-2018-16492</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://hackerone.com/reports/381185">https://hackerone.com/reports/381185</a></p> <p>Release Date: 2019-02-01</p> <p>Fix Resolution: extend - v3.0.2,v2.0.2</p> </p> </details> <p></p>
non_main
cve high detected in extend tgz cve high severity vulnerability vulnerable library extend tgz port of jquery extend for node js and the browser library home page a href path to dependency file tmp ws scm hubot docker package json path to vulnerable library tmp ws scm hubot docker node modules grpc node modules extend package json dependency hierarchy hubot assistant tgz root library google assistant tgz grpc tgz node pre gyp tgz request tgz x extend tgz vulnerable library found in head commit a href vulnerability details a prototype pollution vulnerability was found in module extend that allows an attacker to inject arbitrary properties onto object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution extend isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a prototype pollution vulnerability was found in module extend that allows an attacker to inject arbitrary properties onto object prototype vulnerabilityurl
0
71,582
7,248,059,550
IssuesEvent
2018-02-15 07:34:58
Insidious611/DancingMadFF6
https://api.github.com/repos/Insidious611/DancingMadFF6
closed
Test installer for Hotfix One completely.
installer testing log
Testing includes, in no particular order, new developer selection, new custom options (in various combinations), AWS mirror, new custom track selections, and new source presets. In addition, x86 (32-bit) installer also needs to be thoroughly tested once made.
1.0
Test installer for Hotfix One completely. - Testing includes, in no particular order, new developer selection, new custom options (in various combinations), AWS mirror, new custom track selections, and new source presets. In addition, x86 (32-bit) installer also needs to be thoroughly tested once made.
non_main
test installer for hotfix one completely testing includes in no particular order new developer selection new custom options in various combinations aws mirror new custom track selections and new source presets in addition bit installer also needs to be thoroughly tested once made
0
4,407
22,634,276,877
IssuesEvent
2022-06-30 17:18:32
tethysplatform/tethys
https://api.github.com/repos/tethysplatform/tethys
closed
Tethys Developer Version Installation 'staticfiles' is not a registered tag library
maintain dependencies
I have installed the version the development version using miniconda with the following command : ```bash conda create -n tethys -c tethysplatform/label/dev -c tethysplatform -c conda-forge tethys-platform tethys gen portal_config tethys db configure ``` This installation comes with python 3.10.4 However, when I start the Tethys with the command `tethys manage start` I get the following error: ```bash (tethys) [gio@gio tethys]$ tethys manage start Loading Tethys Extensions... Loading Tethys Apps... Loading Tethys Extensions... Loading Tethys Apps... Performing system checks... System check identified no issues (0 silenced). April 20, 2022 - 21:01:14 Django version 3.2.12, using settings 'tethys_portal.settings' Starting ASGI/Channels version 3.0.4 development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. ERROR:django.request:Internal Server Error: / Traceback (most recent call last): File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py", line 1037, in find_library return parser.libraries[name] KeyError: 'staticfiles' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py", line 451, in thread_handler raise exc_info[1] File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/core/handlers/exception.py", line 38, in inner response = await get_response(request) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/core/handlers/base.py", line 233, in _get_response_async response = await wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py", line 414, in __call__ ret = await asyncio.wait_for(future, timeout=None) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/asyncio/tasks.py", line 408, in wait_for return await fut File 
"/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/current_thread_executor.py", line 22, in run result = self.fn(*self.args, **self.kwargs) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py", line 455, in thread_handler return func(*args, **kwargs) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/tethys_portal/views/home.py", line 29, in home return render(request, template, {"ENABLE_OPEN_SIGNUP": settings.ENABLE_OPEN_SIGNUP, File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/shortcuts.py", line 19, in render content = loader.render_to_string(template_name, context, request, using=using) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader.py", line 62, in render_to_string return template.render(context, request) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/backends/django.py", line 61, in render return self.template.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 170, in render return self._render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 162, in _render return self.nodelist.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 938, in render bit = node.render_annotated(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 905, in render_annotated return self.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py", line 150, in render return compiled_parent._render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 162, in _render return self.nodelist.render(context) File 
"/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 938, in render bit = node.render_annotated(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 905, in render_annotated return self.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py", line 62, in render result = block.nodelist.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 938, in render bit = node.render_annotated(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 905, in render_annotated return self.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py", line 183, in render template = context.template.engine.select_template(template_name) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py", line 174, in select_template return self.get_template(template_name) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py", line 143, in get_template template, origin = self.find_template(template_name) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py", line 125, in find_template template = loader.get_template(name, skip=skip) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loaders/base.py", line 29, in get_template return Template( File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 155, in __init__ self.nodelist = self.compile_nodelist() File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 193, in compile_nodelist return parser.parse() File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", 
line 478, in parse raise self.error(token, e) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 476, in parse compiled_result = compile_func(self, token) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py", line 1088, in load lib = find_library(parser, name) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py", line 1039, in find_library raise TemplateSyntaxError( django.template.exceptions.TemplateSyntaxError: 'staticfiles' is not a registered tag library. Must be one of: admin_list admin_modify admin_urls analytical cache chartbeat clickmap clicky crazy_egg django_bootstrap5 facebook_pixel gauges google_analytics google_analytics_js gosquared gravatar guardian_tags hotjar hubspot humanize i18n intercom kiss_insights kiss_metrics l10n log mixpanel olark optimizely performable piwik rating_mailru recaptcha2 rest_framework session_security_tags site_settings snapengage spring_metrics static tags terms_tags tethys_gizmos tethys_services tz uservoice woopra yandex_metrica ERROR:django.channels.server:HTTP GET / 500 [0.07, 127.0.0.1:38068] ``` I am able to fix this by editing the file at In template /home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/session_security/templates/session_security/all.html, error at line containing `{% load static from staticfiles %}` ```html {% comment %} This demonstrates how to setup session security client side stuff on your own. It provides sensible defaults so you could start with just:: {% include 'session_security/all.html' %} {% endcomment %}  {% load session_security_tags %} {% load i18n l10n %} {% load static from staticfiles %}   {# If the user is not authenticated then there is no session to secure ! 
#} {% if request.user.is_authenticated %}   {# The modal dialog stylesheet, it's pretty light so it should be easy to hack #} <link rel="stylesheet" type="text/css" href="{% static 'session_security/style.css' %}">   {# Include the template that actually contains the modal dialog #} {% include 'session_security/dialog.html' %} ``` I would like to know if there is another fix besides editing the file directly
True
Tethys Developer Version Installation 'staticfiles' is not a registered tag library - I have installed the version the development version using miniconda with the following command : ```bash conda create -n tethys -c tethysplatform/label/dev -c tethysplatform -c conda-forge tethys-platform tethys gen portal_config tethys db configure ``` This installation comes with python 3.10.4 However, when I start the Tethys with the command `tethys manage start` I get the following error: ```bash (tethys) [gio@gio tethys]$ tethys manage start Loading Tethys Extensions... Loading Tethys Apps... Loading Tethys Extensions... Loading Tethys Apps... Performing system checks... System check identified no issues (0 silenced). April 20, 2022 - 21:01:14 Django version 3.2.12, using settings 'tethys_portal.settings' Starting ASGI/Channels version 3.0.4 development server at http://127.0.0.1:8000/ Quit the server with CONTROL-C. ERROR:django.request:Internal Server Error: / Traceback (most recent call last): File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py", line 1037, in find_library return parser.libraries[name] KeyError: 'staticfiles' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py", line 451, in thread_handler raise exc_info[1] File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/core/handlers/exception.py", line 38, in inner response = await get_response(request) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/core/handlers/base.py", line 233, in _get_response_async response = await wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py", line 414, in __call__ ret = await asyncio.wait_for(future, timeout=None) File 
"/home/gio/miniconda3/envs/tethys/lib/python3.10/asyncio/tasks.py", line 408, in wait_for return await fut File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/current_thread_executor.py", line 22, in run result = self.fn(*self.args, **self.kwargs) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/asgiref/sync.py", line 455, in thread_handler return func(*args, **kwargs) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/tethys_portal/views/home.py", line 29, in home return render(request, template, {"ENABLE_OPEN_SIGNUP": settings.ENABLE_OPEN_SIGNUP, File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/shortcuts.py", line 19, in render content = loader.render_to_string(template_name, context, request, using=using) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader.py", line 62, in render_to_string return template.render(context, request) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/backends/django.py", line 61, in render return self.template.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 170, in render return self._render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 162, in _render return self.nodelist.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 938, in render bit = node.render_annotated(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 905, in render_annotated return self.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py", line 150, in render return compiled_parent._render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 162, in 
_render return self.nodelist.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 938, in render bit = node.render_annotated(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 905, in render_annotated return self.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py", line 62, in render result = block.nodelist.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 938, in render bit = node.render_annotated(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 905, in render_annotated return self.render(context) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loader_tags.py", line 183, in render template = context.template.engine.select_template(template_name) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py", line 174, in select_template return self.get_template(template_name) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py", line 143, in get_template template, origin = self.find_template(template_name) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/engine.py", line 125, in find_template template = loader.get_template(name, skip=skip) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/loaders/base.py", line 29, in get_template return Template( File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 155, in __init__ self.nodelist = self.compile_nodelist() File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 193, in compile_nodelist return parser.parse() File 
"/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 478, in parse raise self.error(token, e) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/base.py", line 476, in parse compiled_result = compile_func(self, token) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py", line 1088, in load lib = find_library(parser, name) File "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/django/template/defaulttags.py", line 1039, in find_library raise TemplateSyntaxError( django.template.exceptions.TemplateSyntaxError: 'staticfiles' is not a registered tag library. Must be one of: admin_list admin_modify admin_urls analytical cache chartbeat clickmap clicky crazy_egg django_bootstrap5 facebook_pixel gauges google_analytics google_analytics_js gosquared gravatar guardian_tags hotjar hubspot humanize i18n intercom kiss_insights kiss_metrics l10n log mixpanel olark optimizely performable piwik rating_mailru recaptcha2 rest_framework session_security_tags site_settings snapengage spring_metrics static tags terms_tags tethys_gizmos tethys_services tz uservoice woopra yandex_metrica ERROR:django.channels.server:HTTP GET / 500 [0.07, 127.0.0.1:38068] ``` I am able to fix this by editing the file at In template /home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/session_security/templates/session_security/all.html, error at line containing `{% load static from staticfiles %}` ```html {% comment %} This demonstrates how to setup session security client side stuff on your own. It provides sensible defaults so you could start with just:: {% include 'session_security/all.html' %} {% endcomment %}  {% load session_security_tags %} {% load i18n l10n %} {% load static from staticfiles %}   {# If the user is not authenticated then there is no session to secure ! 
#} {% if request.user.is_authenticated %}   {# The modal dialog stylesheet, it's pretty light so it should be easy to hack #} <link rel="stylesheet" type="text/css" href="{% static 'session_security/style.css' %}">   {# Include the template that actually contains the modal dialog #} {% include 'session_security/dialog.html' %} ``` I would like to know if there is another fix besides editing the file directly
main
tethys developer version installation staticfiles is not a registered tag library i have installed the version the development version using miniconda with the following command bash conda create n tethys c tethysplatform label dev c tethysplatform c conda forge tethys platform tethys gen portal config tethys db configure this installation comes with python however when i start the tethys with the command tethys manage start i get the following error bash tethys tethys manage start loading tethys extensions loading tethys apps loading tethys extensions loading tethys apps performing system checks system check identified no issues silenced april django version using settings tethys portal settings starting asgi channels version development server at quit the server with control c error django request internal server error traceback most recent call last file home gio envs tethys lib site packages django template defaulttags py line in find library return parser libraries keyerror staticfiles during handling of the above exception another exception occurred traceback most recent call last file home gio envs tethys lib site packages asgiref sync py line in thread handler raise exc info file home gio envs tethys lib site packages django core handlers exception py line in inner response await get response request file home gio envs tethys lib site packages django core handlers base py line in get response async response await wrapped callback request callback args callback kwargs file home gio envs tethys lib site packages asgiref sync py line in call ret await asyncio wait for future timeout none file home gio envs tethys lib asyncio tasks py line in wait for return await fut file home gio envs tethys lib site packages asgiref current thread executor py line in run result self fn self args self kwargs file home gio envs tethys lib site packages asgiref sync py line in thread handler return func args kwargs file home gio envs tethys lib site packages tethys portal views 
home py line in home return render request template enable open signup settings enable open signup file home gio envs tethys lib site packages django shortcuts py line in render content loader render to string template name context request using using file home gio envs tethys lib site packages django template loader py line in render to string return template render context request file home gio envs tethys lib site packages django template backends django py line in render return self template render context file home gio envs tethys lib site packages django template base py line in render return self render context file home gio envs tethys lib site packages django template base py line in render return self nodelist render context file home gio envs tethys lib site packages django template base py line in render bit node render annotated context file home gio envs tethys lib site packages django template base py line in render annotated return self render context file home gio envs tethys lib site packages django template loader tags py line in render return compiled parent render context file home gio envs tethys lib site packages django template base py line in render return self nodelist render context file home gio envs tethys lib site packages django template base py line in render bit node render annotated context file home gio envs tethys lib site packages django template base py line in render annotated return self render context file home gio envs tethys lib site packages django template loader tags py line in render result block nodelist render context file home gio envs tethys lib site packages django template base py line in render bit node render annotated context file home gio envs tethys lib site packages django template base py line in render annotated return self render context file home gio envs tethys lib site packages django template loader tags py line in render template context template engine select template template name file home gio 
envs tethys lib site packages django template engine py line in select template return self get template template name file home gio envs tethys lib site packages django template engine py line in get template template origin self find template template name file home gio envs tethys lib site packages django template engine py line in find template template loader get template name skip skip file home gio envs tethys lib site packages django template loaders base py line in get template return template file home gio envs tethys lib site packages django template base py line in init self nodelist self compile nodelist file home gio envs tethys lib site packages django template base py line in compile nodelist return parser parse file home gio envs tethys lib site packages django template base py line in parse raise self error token e file home gio envs tethys lib site packages django template base py line in parse compiled result compile func self token file home gio envs tethys lib site packages django template defaulttags py line in load lib find library parser name file home gio envs tethys lib site packages django template defaulttags py line in find library raise templatesyntaxerror django template exceptions templatesyntaxerror staticfiles is not a registered tag library must be one of admin list admin modify admin urls analytical cache chartbeat clickmap clicky crazy egg django facebook pixel gauges google analytics google analytics js gosquared gravatar guardian tags hotjar hubspot humanize intercom kiss insights kiss metrics log mixpanel olark optimizely performable piwik rating mailru rest framework session security tags site settings snapengage spring metrics static tags terms tags tethys gizmos tethys services tz uservoice woopra yandex metrica error django channels server http get i am able to fix this by editing the file at in template home gio envs tethys lib site packages session security templates session security all html error at line containing 
load static from staticfiles html comment this demonstrates how to setup session security client side stuff on your own it provides sensible defaults so you could start with just include session security all html endcomment   load session security tags load load static from staticfiles   if the user is not authenticated then there is no session to secure if request user is authenticated   the modal dialog stylesheet it s pretty light so it should be easy to hack   include the template that actually contains the modal dialog include session security dialog html i would like to know if there is another fix besides editing the file directly
1
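The manual workaround described in the record above — editing the installed `session_security/all.html` template — can be scripted. Below is a minimal, illustrative Python sketch; the helper names are mine, and the hard-coded path is the one from the traceback, so adjust it for your environment. `{% load static %}` is the Django 3.x replacement for the removed `staticfiles` tag library (note that `static` appears in the error's list of registered libraries). Patching a file inside `site-packages` is a stopgap, not a proper fix.

```python
from pathlib import Path

# The deprecated load tag (removed in Django 3.0) and its replacement.
OLD = "{% load static from staticfiles %}"
NEW = "{% load static %}"

def patch_template_text(text: str) -> str:
    """Return the template text with the deprecated load tag replaced."""
    return text.replace(OLD, NEW)

def patch_template_file(path: Path) -> bool:
    """Patch one template file in place; return True if it was changed."""
    original = path.read_text()
    patched = patch_template_text(original)
    if patched != original:
        path.write_text(patched)
        return True
    return False

if __name__ == "__main__":
    # Path taken from the traceback above; illustrative only.
    template = Path(
        "/home/gio/miniconda3/envs/tethys/lib/python3.10/site-packages/"
        "session_security/templates/session_security/all.html"
    )
    if template.exists():
        print("patched" if patch_template_file(template) else "already ok")
```

Re-running `tethys manage start` after the patch should render the home page without the `TemplateSyntaxError`; upgrading to a `django-session-security` release that supports Django 3.x would make the patch unnecessary.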
211,673
7,203,349,934
IssuesEvent
2018-02-06 08:53:48
rogerthat-platform/rogerthat-android-client
https://api.github.com/repos/rogerthat-platform/rogerthat-android-client
closed
Introduce 4th row in service menu
priority_minor state_verification type_feature
See https://github.com/our-city-app/oca-backend/issues/680 Also, make sure the top padding is the same as the left padding of a service menu item (see screenshot in ticket above)
1.0
Introduce 4th row in service menu - See https://github.com/our-city-app/oca-backend/issues/680 Also, make sure the top padding is the same as the left padding of a service menu item (see screenshot in ticket above)
non_main
introduce row in service menu see also make sure the top padding is the same as the left padding of a service menu item see screenshot in ticket above
0
819,215
30,723,850,159
IssuesEvent
2023-07-27 17:59:16
janus-idp/software-templates
https://api.github.com/repos/janus-idp/software-templates
reopened
GPT: Launch an Ansible Job thru AAP
kind/epic priority/critical
## Goal _Details to follow_ ### What problem does this solve? _Details to follow_ ### Use cases _Details to follow_ ### Acceptance criteria _Details to follow_ ## Issues in Epic - https://github.com/janus-idp/backstage-showcase/issues/383 - https://github.com/janus-idp/software-templates/issues/152 [Doc] - https://github.com/janus-idp/software-templates/issues/153 [QE]
1.0
GPT: Launch an Ansible Job thru AAP - ## Goal _Details to follow_ ### What problem does this solve? _Details to follow_ ### Use cases _Details to follow_ ### Acceptance criteria _Details to follow_ ## Issues in Epic - https://github.com/janus-idp/backstage-showcase/issues/383 - https://github.com/janus-idp/software-templates/issues/152 [Doc] - https://github.com/janus-idp/software-templates/issues/153 [QE]
non_main
gpt launch an ansible job thru aap goal details to follow what problem does this solve details to follow use cases details to follow acceptance criteria details to follow issues in epic
0
476
3,738,984,964
IssuesEvent
2016-03-09 01:36:01
christoff-buerger/racr
https://api.github.com/repos/christoff-buerger/racr
closed
Clean-up of RACR-NET
maintainability medium
The current _RACR-NET_ implementation has to be cleaned up: * `Racr.cs` and `Test.cs` have to be cleaned up (empty lines deleted, proper indentation etc.) * _IronScheme_ has issues with _Scheme_ source files ending in `*.scm`; it expects `*.ss` as file ending or `*.sls` for libraries. Furthermore, it's good practice to compile _.NET_ libraries to dynamically linked libraries (`dll` assemblies). Maybe the `install-libraries.bash` script can be extended to generate respective `dll` assemblies. * The tests have to incorporate the most recent example refactorings, in particular regarding the Petri nets example (cf. issue #41). * Regarding the `mathexp` example used for profiling the overhead of _RACR-NET_: * It contains huge generated test cases. These generated files have to be deleted from the repository. * The pure _Scheme_ implementation should become a library. * The _Scheme_ and _C#_ implementations should become separate projects, each within its own directory (`mathexpr` and `mathexpr-net` respectively). Both projects should be subdirectories of `profiling/racr-net-overhead`.
True
Clean-up of RACR-NET - The current _RACR-NET_ implementation has to be cleaned up: * `Racr.cs` and `Test.cs` have to be cleaned up (empty lines deleted, proper indentation etc.) * _IronScheme_ has issues with _Scheme_ source files ending in `*.scm`; it expects `*.ss` as file ending or `*.sls` for libraries. Furthermore, it's good practice to compile _.NET_ libraries to dynamically linked libraries (`dll` assemblies). Maybe the `install-libraries.bash` script can be extended to generate respective `dll` assemblies. * The tests have to incorporate the most recent example refactorings, in particular regarding the Petri nets example (cf. issue #41). * Regarding the `mathexp` example used for profiling the overhead of _RACR-NET_: * It contains huge generated test cases. These generated files have to be deleted from the repository. * The pure _Scheme_ implementation should become a library. * The _Scheme_ and _C#_ implementations should become separate projects, each within its own directory (`mathexpr` and `mathexpr-net` respectively). Both projects should be subdirectories of `profiling/racr-net-overhead`.
main
clean up of racr net the current racr net implementation has to be cleaned up racr cs and test cs have to be cleaned up empty lines deleted proper indentation etc ironscheme has issues with scheme source files ending in scm it expects ss as file ending or sls for libraries furthermore its good practice to compile net libraries to dynamic linked libraries dll assemblies maybe the install libraries bash script can be extended to generate respective dll assemblies the tests have to incorporate the most recent example refactorings in particular regarding the petri nets example cf issue regarding the mathexp example used for profiling the overhead of racr net it contains huge generated test cases these generated files have to be deleted from the repository the pure scheme implementation should become a library the scheme and c implementations should become separate projects each within its own directory mathexpr and mathexpr net respectively both projects should be subdirectories of profiling racr net overhead
1
4,568
23,747,643,490
IssuesEvent
2022-08-31 17:25:50
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
AWS SAM go 1.6 "embed" https://golang.org/pkg/embed/
type/feature maintainer/need-followup
### SAM supporting go 1.6 would be very helpful We need to utilize some of the new features coming in the latest Go version, such as "embed". I wish SAM would be able to build my code with go 1.6 features.
True
AWS SAM go 1.6 "embed" https://golang.org/pkg/embed/ - ### SAM supporting go 1.6 would be very helpful We need to utilize some of the new features coming in the latest Go version, such as "embed". I wish SAM would be able to build my code with go 1.6 features.
main
aws sam go embed sam supporting go would be much helpful we need to utilize some of the new features coming in go latest version such as embed i wish the sam would be able to build my code with go features
1
4,637
24,009,492,667
IssuesEvent
2022-09-14 17:28:20
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
Improve BreadcrumbSelector component
type: enhancement work: frontend status: ready restricted: new maintainers
## Current behavior - This is the `BreadcrumbSelector` component: ![image](https://user-images.githubusercontent.com/42411/184004827-4bab2c6c-fc23-4768-b50b-1ffd07aa8c03.png) - It is used both for selecting a Schema within the current Database and for selecting a Table or Exploration within the current Schema. ## Desired behavior - It should look more like this mockup: ![mockup](https://user-images.githubusercontent.com/42411/180277150-e68cf483-9ce5-436f-b347-714937d22295.png) Specifically... - [x] A search input should exist to filter the entries across all categories. (Done in #1551) - [x] The search input should be focused when the BreadcrumbSelector opens. (Done in #1558) - [x] Users should be able to use `Up`/`Down`/`Enter` keys to select an item while filtering. (Moved to #1646) - [x] Entries should highlight the substring of their label which matches the search query. (Done in #1620) - [x] Vertical scrolling should not happen so easily. We'll need to increase `max-height` somewhere. A good value might be something like `calc(100vh - 5em)`. (Done in #1560) - [x] When the viewport height is small enough to force vertical scrolling within the component, the search input should not scroll -- only the entries. (Done in #1620) - Additionally (not represented in the mockup) - [x] If the URL for the entry matches _the start_ of the router's current URL, then the entry should visually indicate that it's active. (It's important to match the start because we want to show the active schema when we're on the Table Page, for example.) Done in #1576 - [x] When hovered, each Table entry should have an icon button which opens the Record Selector, navigating the user to the Record Page for their selected record. For this, we'll probably want to add the following property to the `BreadcrumbSelectorEntry` interface: ```ts interface BreadcrumbSelectorEntry { // ... 
button?: { icon: IconProps, label: string, onClick: () => void } } ``` (Done in #1620) It's okay to create small PRs that handle only a portion of these somewhat unrelated improvements.
True
Improve BreadcrumbSelector component - ## Current behavior - This is the `BreadcrumbSelector` component: ![image](https://user-images.githubusercontent.com/42411/184004827-4bab2c6c-fc23-4768-b50b-1ffd07aa8c03.png) - It is used both for selecting a Schema within the current Database and for selecting a Table or Exploration within the current Schema. ## Desired behavior - It should look more like this mockup: ![mockup](https://user-images.githubusercontent.com/42411/180277150-e68cf483-9ce5-436f-b347-714937d22295.png) Specifically... - [x] A search input should exist to filter the entries across all categories. (Done in #1551) - [x] The search input should be focused when the BreadcrumbSelector opens. (Done in #1558) - [x] Users should be able to use `Up`/`Down`/`Enter` keys to select an item while filtering. (Moved to #1646) - [x] Entries should highlight the substring of their label which matches the search query. (Done in #1620) - [x] Vertical scrolling should not happen so easily. We'll need to increase `max-height` somewhere. A good value might be something like `calc(100vh - 5em)`. (Done in #1560) - [x] When the viewport height is small enough to force vertical scrolling within the component, the search input should not scroll -- only the entries. (Done in #1620) - Additionally (not represented in the mockup) - [x] If the URL for the entry matches _the start_ of the router's current URL, then the entry should visually indicate that it's active. (It's important to match the start because we want to show the active schema when we're on the Table Page, for example.) Done in #1576 - [x] When hovered, each Table entry should have an icon button which opens the Record Selector, navigating the user to the Record Page for their selected record. For this, we'll probably want to add the following property to the `BreadcrumbSelectorEntry` interface: ```ts interface BreadcrumbSelectorEntry { // ... 
button?: { icon: IconProps, label: string, onClick: () => void } } ``` (Done in #1620) It's okay to create small PRs that handle only a portion of these somewhat unrelated improvements.
main
improve breadcrumbselector component current behavior this is the breadcrumbselector component it is used both for selecting a schema within the current database and for selecting a table or exploration within the current schema desired behavior it should look more like this mockup specifically a search input should exist to filter the entries across all categories done in the search input should be focused when the breadcrumbselector opens done in users should be able to use up down enter keys to select an item while filtering moved to entries should highlight the substring of their label which matches the search query done in vertical scrolling should not happen so easily we ll need to increase max height somewhere a good value might be something like calc done in when the viewport height is small enough to force vertical scrolling within the component the search input should not scroll only the entries done in additionally not represented in the mockup if the url for the entry matches the start of the router s current url then the entry should visually indicate that it s active it s important to match the start because we want to show the active schema when we re on the table page for example done in when hovered each table entry should have an icon button which opens the record selector navigating the user to the record page for their selected record for this we ll probably want to add the following property to the breadcrumbselectorentry interface ts interface breadcrumbselectorentry button icon iconprops label string onclick void done in it s okay to create small prs that handle only a portion of these somewhat unrelated improvements
1
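One item in the BreadcrumbSelector issue above — highlighting the substring of each label that matches the search query — can be sketched language-independently. A minimal Python stand-in (the real component is TypeScript/Svelte; the function name and segment representation here are hypothetical) that splits a label into matched/unmatched segments:

```python
# Hypothetical sketch of the substring-highlight behaviour: split a label
# into (text, is_match) segments for a case-insensitive query.
def highlight_segments(label, query):
    if not query:
        return [(label, False)]
    segments, low, q, i = [], label.lower(), query.lower(), 0
    while True:
        j = low.find(q, i)
        if j == -1:
            # No further match: emit the unmatched tail, if any.
            if i < len(label):
                segments.append((label[i:], False))
            return segments
        if j > i:
            segments.append((label[i:j], False))   # text before the match
        segments.append((label[j:j + len(q)], True))  # the matched run
        i = j + len(q)
```

A renderer would then wrap the `True` segments in a highlight element while leaving the rest as plain text.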
794
4,397,147,281
IssuesEvent
2016-08-10 06:52:20
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
Slack - markdown when color defined
bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> notification/slack.py ##### ANSIBLE VERSION ``` ansible 2.0.2.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> N/A ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> N/A ##### SUMMARY <!--- Explain the problem briefly --> When using slack notifications, any color option besides normal causes markdown to not be respected in slack. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> ``` - name: Send Notification local_action: module: slack token: token/from/slack msg: "Ansible note for *{{ inventory_hostname }}*" parse: 'full' link_names: 1 channel: "#ansible" color: warning ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS I expect to see a message like >Ansible note for - **hostname** but instead see > Ansible note for - \*hostname\* ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> ``` ok: [localhost -> localhost] => {"changed": false, "invocation": {"module_args": {"attachments": null, "channel": "#ansible", "color": "warning", "domain": null, "icon_emoji": null, "icon_url": "http://www.ansible.com/favicon.ico", "link_names": 1, "msg": "Ansible not for - *localhost*", "parse": null, "token": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "Ansible", "validate_certs": true}, "module_name": "slack"}, "msg": "OK"} ```
True
Slack - markdown when color defined - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> notification/slack.py ##### ANSIBLE VERSION ``` ansible 2.0.2.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> N/A ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> N/A ##### SUMMARY <!--- Explain the problem briefly --> When using slack notifications, any color option besides normal causes markdown to not be respected in slack. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> ``` - name: Send Notification local_action: module: slack token: token/from/slack msg: "Ansible note for *{{ inventory_hostname }}*" parse: 'full' link_names: 1 channel: "#ansible" color: warning ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS I expect to see a message like >Ansible note for - **hostname** but instead see > Ansible note for - \*hostname\* ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> ``` ok: [localhost -> localhost] => {"changed": false, "invocation": {"module_args": {"attachments": null, "channel": "#ansible", "color": "warning", "domain": null, "icon_emoji": null, "icon_url": "http://www.ansible.com/favicon.ico", "link_names": 1, "msg": "Ansible not for - *localhost*", "parse": null, "token": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "Ansible", "validate_certs": true}, "module_name": "slack"}, "msg": "OK"} ```
main
slack markdown when color defined issue type bug report component name notification slack py ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary when using slack notifications with the any color option besides normal causes markdown to not be respected in slack steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name send notification local action module slack token token from slack msg ansible note for inventory hostname parse full link names channel ansible color warning expected results i expect to see a message like ansible note for hostname but instead see ansible note for hostname actual results ok changed false invocation module args attachments null channel ansible color warning domain null icon emoji null icon url link names msg ansible not for localhost parse null token value specified in no log parameter username ansible validate certs true module name slack msg ok
1
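The behaviour reported in the Slack issue above is consistent with how Slack's legacy attachment API works: once a color is set, the message text moves from the top-level `text` field into an attachment, and attachments only render markdown in the fields listed in `mrkdwn_in`. A hedged sketch of a payload builder (the helper function itself is hypothetical; the field names come from Slack's legacy attachments API):

```python
# Sketch of the payload a slack notification module could send so that
# markdown keeps rendering when a color is set. Without a color, Slack
# parses markdown in the top-level "text" field by default; with a color,
# the message becomes an attachment and must opt in via "mrkdwn_in".
def build_slack_payload(msg, channel, color=None):
    if color is None or color == 'normal':
        return {'channel': channel, 'text': msg}
    return {
        'channel': channel,
        'attachments': [{
            'text': msg,
            'color': color,
            'mrkdwn_in': ['text', 'pretext'],  # opt in to markdown rendering
        }],
    }
```

With `mrkdwn_in` present, `*hostname*` would render as bold even inside a colored (e.g. `warning`) attachment.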
3,203
12,236,610,450
IssuesEvent
2020-05-04 16:37:40
RockefellerArchiveCenter/aurora
https://api.github.com/repos/RockefellerArchiveCenter/aurora
closed
Update Aurora to Python 3
maintainability python3
## Is your feature request related to a problem? Please describe. Aurora uses Python 2, which is EOL in 2020. ## Describe the solution you'd like Update Aurora to use Python 3.
True
Update Aurora to Python 3 - ## Is your feature request related to a problem? Please describe. Aurora uses Python 2, which is EOL in 2020. ## Describe the solution you'd like Update Aurora to use Python 3.
main
update aurora to python is your feature request related to a problem please describe aurora uses python which is eol in describe the solution you d like update aurora to use python
1
942
4,668,413,316
IssuesEvent
2016-10-06 02:14:22
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
Broken show command for asa_config (and asa_acl)
affects_2.3 bug_report networking P2 waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME * asa_config * asa_acl ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` $ ansible --version ansible 2.3.0 (devel 02b08b1b0c) last updated 2016/10/05 21:30:24 (GMT +200) lib/ansible/modules/core: (detached HEAD 0ee774ff15) last updated 2016/10/05 21:30:37 (GMT +200) lib/ansible/modules/extras: (detached HEAD 5cc72c3f06) last updated 2016/10/05 21:30:38 (GMT +200) config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/a ##### OS / ENVIRONMENT N/A ##### SUMMARY Cisco ASA devices support two methods of displaying the current configuration. * show running-config : Shows configuration, but masks passwords. Supports adding `all` to display default configuration, i.e. `show running-config all` * more system:running-config : Shows configuration without masking passwords The more system:running-config form is needed in some scenarios, for instance if you want to create a site to site vpn connection (a tunnel-group in ASA) and include ike passwords. The modules support the argument show_command; however, after the refactoring of the networking modules this command never gets used. ##### STEPS TO REPRODUCE I've run the tests from the [test-network-modules](https://github.com/ansible/test-network-modules) repo. Specifically the [more_system.yaml](https://github.com/ansible/test-network-modules/blob/devel/roles/test_asa_config/tests/cli/more_system.yaml) tests. 
##### EXPECTED RESULTS The testrun should pass the idempotency test ##### ACTUAL RESULTS This is the output from the test run ``` TASK [test_asa_config : debug] ************************************************* ok: [ns2903-asa-02] => { "msg": "START cli/more_system.yaml" } TASK [test_asa_config : setup] ************************************************* changed: [ns2903-asa-02] TASK [test_asa_config : Prepare tunnel-group] ********************************** changed: [ns2903-asa-02] TASK [test_asa_config : Setup tunnel-group] ************************************ changed: [ns2903-asa-02] TASK [test_asa_config : Test idempotency] ************************************** changed: [ns2903-asa-02] TASK [test_asa_config : assert] ************************************************ fatal: [ns2903-asa-02]: FAILED! => { "assertion": "result.changed == false", "changed": false, "evaluated_to": false, "failed": true } ``` #### Errors In [asa.py](https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/asa.py) the param is available: ```Python add_argument('show_command', dict(default='show running-config', choices=['show running-config', 'more system:running-config'])) ``` However the function to get the configuration never uses this setting: ```Python def get_config(self, include_defaults=False): cmd = 'show running-config' if include_defaults: cmd += ' all' return self.run_commands(cmd)[0] ``` #### Possible solution If module_util was changed to something like this: ```Python def get_config(self, include_defaults=False, show_command='show running-config'): if show_command == 'show running-config' and include_defaults: show_command += ' all' return self.run_commands(show_command)[0] ``` And the modules were changed so that the param was sent while getting the configuration, and perhaps also used if the backup parameter was used.
True
Broken show command for asa_config (and asa_acl) - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME * asa_config * asa_acl ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` $ ansible --version ansible 2.3.0 (devel 02b08b1b0c) last updated 2016/10/05 21:30:24 (GMT +200) lib/ansible/modules/core: (detached HEAD 0ee774ff15) last updated 2016/10/05 21:30:37 (GMT +200) lib/ansible/modules/extras: (detached HEAD 5cc72c3f06) last updated 2016/10/05 21:30:38 (GMT +200) config file = configured module search path = Default w/o overrides ``` ##### CONFIGURATION N/a ##### OS / ENVIRONMENT N/A ##### SUMMARY The Cisco ASA devices supports two methods of displaying the current configuration. * show running-config : Shows configuration, but masks passwords. Supports adding `all` to display default configuration, i.e. `show running-config all` * more system:running-config : Shows configuration without masking passwords The more system:running is needed in some scenarios for instance if you want to create a site to site vpn connection (a tunnel-group in ASA) and include ike passwords. The modules support the argument show_command, however after the refactoring of the networking modules this command never gets used. ##### STEPS TO REPRODUCE I've run the tests from the [test-network-modules](https://github.com/ansible/test-network-modules) repo. Specifically the [more_system.yaml](https://github.com/ansible/test-network-modules/blob/devel/roles/test_asa_config/tests/cli/more_system.yaml) tests. 
##### EXPECTED RESULTS The testrun should pass the idempotency test ##### ACTUAL RESULTS This is the output from the test run ``` TASK [test_asa_config : debug] ************************************************* ok: [ns2903-asa-02] => { "msg": "START cli/more_system.yaml" } TASK [test_asa_config : setup] ************************************************* changed: [ns2903-asa-02] TASK [test_asa_config : Prepare tunnel-group] ********************************** changed: [ns2903-asa-02] TASK [test_asa_config : Setup tunnel-group] ************************************ changed: [ns2903-asa-02] TASK [test_asa_config : Test idempotency] ************************************** changed: [ns2903-asa-02] TASK [test_asa_config : assert] ************************************************ fatal: [ns2903-asa-02]: FAILED! => { "assertion": "result.changed == false", "changed": false, "evaluated_to": false, "failed": true } ``` #### Errors In [asa.py](https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/asa.py) the param is available: ```Python add_argument('show_command', dict(default='show running-config', choices=['show running-config', 'more system:running-config'])) ``` However the function to get the configuration never uses this setting: ```Python def get_config(self, include_defaults=False): cmd = 'show running-config' if include_defaults: cmd += ' all' return self.run_commands(cmd)[0] ``` #### Possible solution If module_util was changed to something like this: ```Python def get_config(self, include_defaults=False, show_command='show running-config'): if show_command == 'show running-config' and include_defaults: show_command += ' all' return self.run_commands(show_command)[0] ``` And the modules were changed so that the param was sent while getting the configuration, and perhaps also used if the backup parameter was used.
main
broken show command for asa config and asa acl issue type bug report component name asa config asa acl ansible version ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file configured module search path default w o overrides configuration n a os environment n a summary the cisco asa devices supports two methods of displaying the current configuration show running config shows configuration but masks passwords supports adding all to display default configuration i e show running config all more system running config shows configuration without masking passwords the more system running is needed in some scenarios for instance if you want to create a site to site vpn connection a tunnel group in asa and include ike passwords the modules support the argument show command however after the refactoring of the networking modules this command never gets used steps to reproduce i ve run the tests from the repo specifically the tests expected results the testrun should pass the idempotency test actual results this is the output from the test run task ok msg start cli more system yaml task changed task changed task changed task changed task fatal failed assertion result changed false changed false evaluated to false failed true errors in the param is available python add argument show command dict default show running config choices however the function to get the configuration never uses this setting python def get config self include defaults false cmd show running config if include defaults cmd all return self run commands cmd possible solution if module util was changed to something like this python def get config self include defaults false show command show running config if show command show running config and include defaults show command all return self run commands show command and the modules were changed so that the param was sent while getting 
the configuration and perhaps also used if the backup parameter was used
1
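The fix proposed in the asa_config report above can be sketched as a standalone function. This is only an illustration of the suggested signature — `run_commands` is passed in as a parameter to keep the sketch self-contained, whereas the real module method would call `self.run_commands`:

```python
# Sketch of get_config() threading the user-supplied show_command through,
# per the fix proposed in the report above. Only 'show running-config'
# accepts the 'all' suffix; 'more system:running-config' shows the config
# without masking passwords and takes no suffix.
def get_config(run_commands, include_defaults=False,
               show_command='show running-config'):
    if show_command == 'show running-config' and include_defaults:
        show_command += ' all'
    return run_commands(show_command)[0]
```

The module would then pass its `show_command` parameter when fetching the configuration, and likely also when the `backup` parameter is used.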
1,864
6,577,486,937
IssuesEvent
2017-09-12 01:15:30
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Ability to ignore excludes in the yum module
affects_2.1 feature_idea waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Feature Idea ##### COMPONENT NAME <!--- Name of the plugin/module/task --> yum module ##### ANSIBLE VERSION ``` ansible 2.1.0 ``` ##### OS / ENVIRONMENT <!--- N/A --> ##### SUMMARY The ability to ignore yum excludes in the yum.conf. There are some situations where package upgrades which we normally block in the yum.conf should be ignored. In our case, our hosting provider regularly updates packages for security concerns and we need to control sensitive packages to prevent application issues.
True
Ability to ignore excludes in the yum module - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Feature Idea ##### COMPONENT NAME <!--- Name of the plugin/module/task --> yum module ##### ANSIBLE VERSION ``` ansible 2.1.0 ``` ##### OS / ENVIRONMENT <!--- N/A --> ##### SUMMARY The ability to ignore yum excludes in the yum.conf. There are some situations where package upgrades which we normally block in the yum.conf should be ignored. In our case, our hosting provider regularly updates packages for security concerns and we need to control sensitive packages to prevent application issues.
main
ability to ignore excludes in the yum module issue type feature idea component name yum module ansible version ansible os environment n a summary the ability to ignore yum excludes in the yum conf there are some situations where package upgrades which we normally block in the yum conf should be ignored in our case our hosting provider regularly updates packages for security concerns and we need to control sensitive packages to prevent application issues
1
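The feature requested in the yum issue above maps naturally onto yum's existing `--disableexcludes` flag, which accepts `all`, `main`, or a repository id (later versions of the Ansible yum module did grow a `disable_excludes` option along these lines). A hypothetical sketch of the command construction:

```python
# Hypothetical sketch: build a yum install command that can bypass the
# excludes configured in yum.conf via --disableexcludes.
def yum_install_command(packages, disable_excludes=None):
    cmd = ['yum', '-y', 'install']
    if disable_excludes:
        # 'all', 'main', or a specific repo id
        cmd.append('--disableexcludes=%s' % disable_excludes)
    return cmd + list(packages)
```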
1,851
6,577,396,140
IssuesEvent
2017-09-12 00:37:14
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Show actual status code for uri
affects_2.0 feature_idea waiting_on_maintainer
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME uri ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION none ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY When using uri and a status code other than 200 is returned, the actual status code returned is not reported. The message is "Status code was not [200]". Sometimes it is very difficult to determine the actual status code returned by other means. It would make debugging easier if the actual error code was included in the error message.
True
Show actual status code for uri - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME uri ##### ANSIBLE VERSION ``` ansible 2.0.2.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION none ##### OS / ENVIRONMENT Ubuntu 14.04 ##### SUMMARY When using uri and a status code other than 200 is returned, the actual status code returned is not reported. The message is "Status code was not [200]". Sometimes it is very difficult to determine the actual status code returned by other means. It would make debugging easier if the actual error code was included in the error message.
main
show actual status code for uri issue type feature idea component name uri ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration none os environment ubuntu summary when using uri and a status code other than is returned the actual status code returned is not reported the message is status code was not sometimes it is very difficult to determine the actual status code returned by other means it would make debugging easier if the actual error code was included in the error message
1
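The uri feature request above amounts to a one-line change in how the failure message is built. A minimal sketch (the function name is hypothetical; later Ansible versions report a message of roughly this shape):

```python
# Sketch: include the actual status code alongside the expected codes in
# the failure message, instead of only "Status code was not [200]".
def check_status(actual, expected=(200,)):
    if actual in expected:
        return None  # no error
    return "Status code was %d and not %s" % (actual, list(expected))
```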
669,533
22,629,795,325
IssuesEvent
2022-06-30 13:48:14
catjacks38/FCC-GAN
https://api.github.com/repos/catjacks38/FCC-GAN
opened
implement a way to use training images of any set size
low priority
this sounds like a lot of effort to do, but maybe at some point i will implement this... or maybe someone else can. i would prefer the latter cause im kinda lazy. easier to merge a PR than actually make it
1.0
implement a way to use training images of any set size - this sounds like a lot of effort to do, but maybe at some point i will implement this... or maybe someone else can. i would prefer the latter cause im kinda lazy. easier to merge a PR than actually make it
non_main
implement a way to use training images of any set size this sounds like a lot of effort to do but maybe at somepoint i will implement this or maybe someone else can i would prefer the latter cause im kinda lazy easier to merge a pr than actually make it
0
71,394
13,652,438,133
IssuesEvent
2020-09-27 07:31:45
gupta-shrinath/Notes
https://api.github.com/repos/gupta-shrinath/Notes
closed
Change approach of bottom navigation implementation
code improvement help wanted
* Currently the bottom_navigation.dart has three lists `unSelectedItems` `selectedItems` `items` * The `items` list has home as the selected item and the `items` list is passed to the items property of BottomNavigationBar * When onTap of BottomNavigationBar is called three things happen * the tapped index of the `items` list is changed to have the selected item from the `selectedItems` list * the rest of the elements in the `items` list are changed to have unselected items from the `unSelectedItems` list * the current index of BottomNavigationBar is changed As you can see this is a naive approach and I need help in improving this code.
1.0
Change approach of bottom navigation implementation - * Currently the bottom_navigation.dart has three lists `unSelectedItems` `selectedItems` `items` * The `items` list has home as the selected item and the `items` list is passed to the items property of BottomNavigationBar * When onTap of BottomNavigationBar is called three things happen * the tapped index of the `items` list is changed to have the selected item from the `selectedItems` list * the rest of the elements in the `items` list are changed to have unselected items from the `unSelectedItems` list * the current index of BottomNavigationBar is changed As you can see this is a naive approach and I need help in improving this code.
non_main
change approach of bottom navigation implementation currently the bottom navigation dart has three lists unselecteditems selecteditems items the items list has home as selected item and items list is passed to items property of bottomnavigationbar when ontap of bottomnavigationbar is called three things happen items list tapped index is changed to have selected item from selecteditems list the rest elements in items list is changed to have unselected items from unselecteditems list the current index of bottomnavigationbar is changed as you can see this is a naive approach and i need help in improving this code
0
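The improvement the Notes issue above asks for is to stop mutating three parallel lists and instead keep a single selected index, deriving each item's state when the bar is built. A Python stand-in for the Dart logic (names hypothetical):

```python
# Sketch: derive the item list from one selected index instead of
# keeping unSelectedItems/selectedItems/items in sync by hand.
def build_items(labels, selected_index):
    return [
        {'label': label, 'selected': i == selected_index}
        for i, label in enumerate(labels)
    ]
```

`onTap` would then only need to update `selected_index` and rebuild — no list surgery.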
2,347
8,393,522,623
IssuesEvent
2018-10-09 20:48:29
citrusframework/citrus
https://api.github.com/repos/citrusframework/citrus
closed
404 error for todo-list and todo-app
Prio: Low READY Type: Maintainance
Hi, I received a 404 error when clicking on the todo-list and todo-app links available in https://github.com/citrusframework/citrus-samples/tree/master/samples-cucumber/sample-cucumber/java-dsl
True
404 error for todo-list and todo-app - Hi, I received a 404 error when clicking on the todo-list and todo-app links available in https://github.com/citrusframework/citrus-samples/tree/master/samples-cucumber/sample-cucumber/java-dsl
main
error for todo list and todo app hi i received error when clicked on todo list and todo app links available in
1
4,385
22,317,247,061
IssuesEvent
2022-06-14 00:09:15
cncf/glossary
https://api.github.com/repos/cncf/glossary
closed
Rename improper markdown filenames
maintainers
In 'How to Contribute', we guide contributors to set markdown filename with this rule: > no capitalization and no space, and .md at the end As a result of guidance and compliance of contributors, most of markdown filenames are being appropriate. But there exist some exceptions, so I suggest to rename those. I am suggesting this because our focus should be on awareness and getting people to use and **reference the terms** (borrowing @CathPag's words). [Example] - `TLS(Transport Layer Security).md` - ![image](https://user-images.githubusercontent.com/46767780/165647870-0d5d7f40-7d6d-4966-84e5-063e26c4946a.png) (which seems not great, IMO) - Suggestion: `transport-layer-security.md` --- [Suggestions] | As-is | To-be | Current URL | |---|---|---| | `TLS(Transport Layer Security).md` | `transport_layer_security.md` | https://glossary.cncf.io/tlstransport-layer-security/ | | `container-image.md` | `container_image.md` | https://glossary.cncf.io/container-image/ <br> For now this term is not displayed, since `status` is not `Completed` | | `mTLS (Mutual Transport Layer Security).md` | `mutual_transport_layer_security.md` | https://glossary.cncf.io/mtls-mutual-transport-layer-security/ | | `versioncontrol.md` | `version_control.md` | https://glossary.cncf.io/versioncontrol/ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | --- [Caution] When updating filenames, one should also update the links toward those terms, and the changes should be propagated to each L10n directories/branches too.
True
Rename improper markdown filenames - In 'How to Contribute', we guide contributors to set markdown filename with this rule: > no capitalization and no space, and .md at the end As a result of guidance and compliance of contributors, most of markdown filenames are being appropriate. But there exist some exceptions, so I suggest to rename those. I am suggesting this because our focus should be on awareness and getting people to use and **reference the terms** (borrowing @CathPag's words). [Example] - `TLS(Transport Layer Security).md` - ![image](https://user-images.githubusercontent.com/46767780/165647870-0d5d7f40-7d6d-4966-84e5-063e26c4946a.png) (which seems not great, IMO) - Suggestion: `transport-layer-security.md` --- [Suggestions] | As-is | To-be | Current URL | |---|---|---| | `TLS(Transport Layer Security).md` | `transport_layer_security.md` | https://glossary.cncf.io/tlstransport-layer-security/ | | `container-image.md` | `container_image.md` | https://glossary.cncf.io/container-image/ <br> For now this term is not displayed, since `status` is not `Completed` | | `mTLS (Mutual Transport Layer Security).md` | `mutual_transport_layer_security.md` | https://glossary.cncf.io/mtls-mutual-transport-layer-security/ | | `versioncontrol.md` | `version_control.md` | https://glossary.cncf.io/versioncontrol/ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | --- [Caution] When updating filenames, one should also update the links toward those terms, and the changes should be propagated to each L10n directories/branches too.
main
rename improper markdown filenames in how to contribute we guide contributors to set markdown filename with this rule no capitalization and no space and md at the end as a result of guidance and compliance of contributors most of markdown filenames are being appropriate but there exist some exceptions so i suggest to rename those i am suggesting this because our focus should be on awareness and getting people to use and reference the terms borrowing cathpag s words tls transport layer security md which seems not great imo suggestion transport layer security md as is to be current url tls transport layer security md transport layer security md container image md container image md for now this term is not displayed since status is not completed mtls mutual transport layer security md mutual transport layer security md versioncontrol md version control md when updating filenames one should also update the links toward those terms and the changes should be propagated to each directories branches too
1
653
4,164,930,744
IssuesEvent
2016-06-19 05:24:01
Homebrew/homebrew-core
https://api.github.com/repos/Homebrew/homebrew-core
closed
RFC: Cython formula
maintainer feedback python question
Cython is a tool that makes it easy to write C extensions for Python and Python bindings for C libraries. It's used by a handful of formulas (see below), which install it temporarily to buildpath, since it is not needed at runtime. Building Cython is relatively slow. It would be nice to have a formula we can bottle and use as a :build dependency. Does that sound reasonable? ``` tim@rocketman:homebrew (master)$ grep -ilR --exclude-dir .git cython . | cut -d/ -f2 | sort | uniq -c 8 homebrew-core 1 homebrew-games 3 homebrew-science ```
True
RFC: Cython formula - Cython is a tool that makes it easy to write C extensions for Python and Python bindings for C libraries. It's used by a handful of formulas (see below), which install it temporarily to buildpath, since it is not needed at runtime. Building Cython is relatively slow. It would be nice to have a formula we can bottle and use as a :build dependency. Does that sound reasonable? ``` tim@rocketman:homebrew (master)$ grep -ilR --exclude-dir .git cython . | cut -d/ -f2 | sort | uniq -c 8 homebrew-core 1 homebrew-games 3 homebrew-science ```
main
rfc cython formula cython is a tool that makes it easy to write c extensions for python and python bindings for c libraries it s used by a handful of formulas see below which install it temporarily to buildpath since it is not needed at runtime building cython is relatively slow it would be nice to have a formula we can bottle and use as a build dependency does that sound reasonable tim rocketman homebrew master grep ilr exclude dir git cython cut d sort uniq c homebrew core homebrew games homebrew science
1