Dataset schema (column, summary type, range or cardinality):

| column | summary | range / values |
|---|---|---|
| Unnamed: 0 | int64 | 1 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 7 – 112 |
| repo_url | stringlengths | 36 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 3 – 438 |
| labels | stringlengths | 4 – 308 |
| body | stringlengths | 7 – 254k |
| index | stringclasses | 7 values |
| text_combine | stringlengths | 96 – 254k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 246k |
| binary_label | int64 | 0 – 1 |
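The schema describes a flat issue-event table with two derived fields. A minimal sketch of how those fields appear to relate, based on the sample rows: `text_combine` looks like `title + " - " + body`, and `binary_label` looks like an indicator for `index == "main"`. Both mappings are inferred from the examples, not from any documented contract.

```python
def derive_fields(row: dict) -> dict:
    """Fill in the two derived columns observed in the sample rows.
    The concatenation format and the index -> binary_label rule are
    assumptions read off the data, not a documented specification."""
    row["text_combine"] = f"{row['title']} - {row['body']}"
    row["binary_label"] = 1 if row["index"] == "main" else 0
    return row

example = derive_fields({
    "title": "refactor bar navigation",
    "body": "Right now both tab navigation bars ...",
    "index": "main",
})
```

Rows labeled `non_main` (for example the vulnerability-roundup issues) would get `binary_label = 0` under this rule, which matches the sample records.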
4,302
21,673,484,514
IssuesEvent
2022-05-08 10:37:29
RalfKoban/MiKo-Analyzers
https://api.github.com/repos/RalfKoban/MiKo-Analyzers
closed
Use 'switch ... return' instead of 'switch break'
feature Area: analyzer Area: maintainability
Often, code using switch statements can be simplified by simply returning inside the switch, instead of breaking and then returning. So we should report a warning. Example: ```c# public void DoSomething(int i) { var text = string.Empty; switch (i) { case 0: text = "Zero"; break; case 1: text = "One"; break; case 2: text = "Two"; break; } // do something with text } ``` Could be replaced with: ```c# public void DoSomething(int i) { var text = GetText(i); // do something with text } private static string GetText(int i) { switch (i) { case 0: return "Zero"; case 1: return "One"; case 2: return "Two"; default: return string.Empty; } } ```
True
Use 'switch ... return' instead of 'switch break' - Often, code using switch statements can be simplified by simply returning inside the switch, instead of breaking and then returning. So we should report a warning. Example: ```c# public void DoSomething(int i) { var text = string.Empty; switch (i) { case 0: text = "Zero"; break; case 1: text = "One"; break; case 2: text = "Two"; break; } // do something with text } ``` Could be replaced with: ```c# public void DoSomething(int i) { var text = GetText(i); // do something with text } private static string GetText(int i) { switch (i) { case 0: return "Zero"; case 1: return "One"; case 2: return "Two"; default: return string.Empty; } } ```
main
use switch return instead of switch break often code using switch statements can be simplified by simply returning inside the switch instead of breaking and then returning so we should report a warning example c public void dosomething int i var text string empty switch i case text zero break case text one break case text two break do something with text could be replaced with c public void dosomething int i var text gettext i do something with text private static string gettext int i switch i case return zero case return one case return two default return string empty
1
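Comparing each record's `text_combine` with its `text` field, the normalization appears to lowercase the text, strip markdown links and bare URLs, and replace digits and punctuation with whitespace (e.g. `case 0:` becomes `case`). This is one plausible reconstruction of that pipeline; the exact rules are an assumption reverse-engineered from the examples above and below.

```python
import re

def normalize(title: str, body: str) -> str:
    """Approximate the dataset's `text` field from title and body.
    Steps (all inferred from the sample rows, not documented):
    drop markdown links and bare URLs, lowercase, replace every
    non-letter run with a single space, then trim."""
    combined = f"{title} - {body}"                              # mirrors text_combine
    combined = re.sub(r"\[[^\]]*\]\([^)]*\)", " ", combined)    # [text](url) links
    combined = re.sub(r"https?://\S+", " ", combined)           # bare URLs
    combined = combined.lower()
    combined = re.sub(r"[^a-z]+", " ", combined)                # digits + punctuation
    return combined.strip()
```

Note that this drops markdown link text along with the URL (record 2's `text` loses the phrase "in a research paper" entirely), so the regex above matches that behavior, even though keeping the link text would arguably be more useful.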
4,481
23,356,479,750
IssuesEvent
2022-08-10 07:54:33
coq/platform
https://api.github.com/repos/coq/platform
closed
Add itauto to the Coq Platform
kind: package inclusion approval: has maintainer agreement
[itauto](https://gitlab.inria.fr/fbesson/itauto) is a reflexive intuitionistic SAT solver parameterised by a theory module. When run inside Coq, the theory module wraps an arbitrary Coq tactic, e.g., the `lia` solver for linear arithmetic (`itauto lia`) or the `congruence` solver for uninterpreted function symbols and constructors (`itauto congruence`). The default invocation of `itauto` uses `auto` as a leaf tactic and is then a more stable alternative to `intuition`, which currently does `intuition (auto with *)`. Included is also a more experimental SMT-like tactic `smt` that does Nelson-Oppen combination of `lia` and `congruence`. Key parts of the implementation are described [in a research paper](https://drops.dagstuhl.de/opus/volltexte/2021/13904/pdf/LIPIcs-ITP-2021-9.pdf). To widen the range of tactics for automation to Coq users, I propose that itauto is included in the Coq Platform. The project has already had regular releases and opam packages since Coq 8.13, most recently for Coq 8.16 (https://github.com/coq/opam-coq-archive/pull/2206) and is part of Coq's CI. I've already got the go-ahead by email from the author of itauto to propose the Platform inclusion, but pinging him in here as well: @fajb
True
Add itauto to the Coq Platform - [itauto](https://gitlab.inria.fr/fbesson/itauto) is a reflexive intuitionistic SAT solver parameterised by a theory module. When run inside Coq, the theory module wraps an arbitrary Coq tactic, e.g., the `lia` solver for linear arithmetic (`itauto lia`) or the `congruence` solver for uninterpreted function symbols and constructors (`itauto congruence`). The default invocation of `itauto` uses `auto` as a leaf tactic and is then a more stable alternative to `intuition`, which currently does `intuition (auto with *)`. Included is also a more experimental SMT-like tactic `smt` that does Nelson-Oppen combination of `lia` and `congruence`. Key parts of the implementation are described [in a research paper](https://drops.dagstuhl.de/opus/volltexte/2021/13904/pdf/LIPIcs-ITP-2021-9.pdf). To widen the range of tactics for automation to Coq users, I propose that itauto is included in the Coq Platform. The project has already had regular releases and opam packages since Coq 8.13, most recently for Coq 8.16 (https://github.com/coq/opam-coq-archive/pull/2206) and is part of Coq's CI. I've already got the go-ahead by email from the author of itauto to propose the Platform inclusion, but pinging him in here as well: @fajb
main
add itauto to the coq platform is a reflexive intuitionistic sat solver parameterised by a theory module when run inside coq the theory module wraps an arbitrary coq tactic e g the lia solver for linear arithmetic itauto lia or the congruence solver for uninterpreted function symbols and constructors itauto congruence the default invocation of itauto uses auto as a leaf tactic and is then a more stable alternative to intuition which currently does intuition auto with included is also a more experimental smt like tactic smt that does nelson oppen combination of lia and congruence key parts of the implementation are described to widen the range of tactics for automation to coq users i propose that itauto is included in the coq platform the project has already had regular releases and opam packages since coq most recently for coq and is part of coq s ci i ve already got the go ahead by email from the author of itauto to propose the platform inclusion but pinging him in here as well fajb
1
23,445
10,885,669,378
IssuesEvent
2019-11-18 10:51:48
ckauhaus/nixpkgs
https://api.github.com/repos/ckauhaus/nixpkgs
closed
Vulnerability roundup 77: qemu-4.1.0: 1 advisory
1.severity: security
[search](https://search.nix.gsc.io/?q=qemu&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=qemu+in%3Apath&type=Code) * [ ] [CVE-2019-15890](https://nvd.nist.gov/vuln/detail/CVE-2019-15890) (nixos-unstable) Scanned versions: nixos-unstable: c1966522d7d. May contain false positives.
True
Vulnerability roundup 77: qemu-4.1.0: 1 advisory - [search](https://search.nix.gsc.io/?q=qemu&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=qemu+in%3Apath&type=Code) * [ ] [CVE-2019-15890](https://nvd.nist.gov/vuln/detail/CVE-2019-15890) (nixos-unstable) Scanned versions: nixos-unstable: c1966522d7d. May contain false positives.
non_main
vulnerability roundup qemu advisory nixos unstable scanned versions nixos unstable may contain false positives
0
5,818
30,792,528,200
IssuesEvent
2023-07-31 17:14:56
jupyter-naas/awesome-notebooks
https://api.github.com/repos/jupyter-naas/awesome-notebooks
closed
JSON - Explore Large JSON Files
templates maintainer
This notebook explores large JSON files using a Python library. It is useful for organizations to quickly analyze large JSON files.
True
JSON - Explore Large JSON Files - This notebook explores large JSON files using a Python library. It is useful for organizations to quickly analyze large JSON files.
main
json explore large json files this notebook explores large json files using a python library it is useful for organizations to quickly analyze large json files
1
919
4,622,130,203
IssuesEvent
2016-09-27 06:01:10
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
docker_image module doesn't work with local registry
affects_2.1 bug_report cloud docker waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> docker_image ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.0.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> Ubuntu 16.04 LTS ##### SUMMARY I installed my own local registry on localhost 5000: ```` docker run -d -p 5000:5000 --restart=always registry:2 ``` and when triggering the docker_image module I get: ``` fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Error: configuration for localhost:5000 not found. Try logging into localhost:5000 first."} ``` I am able to push an image to this registry using docker push from the command line. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> I use the docker_image module in the following way, in a basic playbook: Tag the image: ``` docker tag my_image localhost:5000/my_image ``` Use the docker_image module: ``` - name: Try local registry docker_image: path: "{{my_image_dir}}" name: localhost:5000/my_image force: true state: present ``` I do curl ``` http://localhost:5000/v2/_catalog ``` and this returns fine, so the registry works at localhost:5000 but the image doesn't get pushed. ##### EXPECTED RESULTS Get the image to the local registry. ##### ACTUAL RESULTS ``` fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Error: configuration for localhost:5000 not found. 
Try logging into localhost:5000 first."} ```
True
docker_image module doesn't work with local registry - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> docker_image ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.0.0 ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> Ubuntu 16.04 LTS ##### SUMMARY I installed my own local registry on localhost 5000: ```` docker run -d -p 5000:5000 --restart=always registry:2 ``` and when triggering the docker_image module I get: ``` fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Error: configuration for localhost:5000 not found. Try logging into localhost:5000 first."} ``` I am able to push an image to this registry using docker push from the command line. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> I use the docker_image module in the following way, in a basic playbook: Tag the image: ``` docker tag my_image localhost:5000/my_image ``` Use the docker_image module: ``` - name: Try local registry docker_image: path: "{{my_image_dir}}" name: localhost:5000/my_image force: true state: present ``` I do curl ``` http://localhost:5000/v2/_catalog ``` and this returns fine, so the registry works at localhost:5000 but the image doesn't get pushed. ##### EXPECTED RESULTS Get the image to the local registry. ##### ACTUAL RESULTS ``` fatal: [localhost]: FAILED! 
=> {"changed": false, "failed": true, "msg": "Error: configuration for localhost:5000 not found. Try logging into localhost:5000 first."} ```
main
docker image module doesn t work with local registry issue type bug report component name docker image ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu lts summary i installed my own local registry on localhost docker run d p restart always registry and when triggering the docker image module i get fatal failed changed false failed true msg error configuration for localhost not found try logging into localhost first i am able to push an image to this registry using docker push from the command line steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used i use the docker image module in the following way in a basic playbook tag the image docker tag my image localhost my image use the docker image module name try local registry docker image path my image dir name localhost my image force true state present i do curl and this returns fine so the registry works at localhost but the image doesn t get pushed expected results get the image to the local registry actual results fatal failed changed false failed true msg error configuration for localhost not found try logging into localhost first
1
3,320
12,879,837,334
IssuesEvent
2020-07-12 01:10:45
aar3/tik
https://api.github.com/repos/aar3/tik
closed
refactor bar navigation
maintainability refactor
#### Issue Right now both tab navigation bars are being rendered on per-view basis. Ideally we want them to a component of the navigation and not the views via the use of react-native-router-flux's `Tab` abstractions and such - [Example on Medium.com](https://medium.com/@iakash1195/custom-tabbed-navigation-in-react-native-router-flux-47d1ad12ce2e) #### Acceptance Criteria When all view-based tab bar navigation has been swapped out for react-native-router-flux components
True
refactor bar navigation - #### Issue Right now both tab navigation bars are being rendered on per-view basis. Ideally we want them to a component of the navigation and not the views via the use of react-native-router-flux's `Tab` abstractions and such - [Example on Medium.com](https://medium.com/@iakash1195/custom-tabbed-navigation-in-react-native-router-flux-47d1ad12ce2e) #### Acceptance Criteria When all view-based tab bar navigation has been swapped out for react-native-router-flux components
main
refactor bar navigation issue right now both tab navigation bars are being rendered on per view basis ideally we want them to a component of the navigation and not the views via the use of react native router flux s tab abstractions and such acceptance criteria when all view based tab bar navigation has been swapped out for react native router flux components
1
72,345
15,225,427,686
IssuesEvent
2021-02-18 07:17:41
devikab2b/whites5
https://api.github.com/repos/devikab2b/whites5
closed
CVE-2013-4002 (High) detected in xercesImpl-2.9.1.jar - autoclosed
security vulnerability
## CVE-2013-4002 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xercesImpl-2.9.1.jar</b></p></summary> <p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI), a complete framework for building parser components and configurations that is extremely modular and easy to program.</p> <p>Path to dependency file: whites5/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/xerces/xercesImpl/2.9.1/xercesImpl-2.9.1.jar</p> <p> Dependency Hierarchy: - spark-core_2.12-2.4.7.jar (Root Library) - hadoop-client-2.6.5.jar - hadoop-hdfs-2.6.5.jar - :x: **xercesImpl-2.9.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/devikab2b/whites5/commit/b24afaf70d8746f42dcb93a7ef65ad261fda5b7f">b24afaf70d8746f42dcb93a7ef65ad261fda5b7f</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XMLscanner.java in Apache Xerces2 Java Parser before 2.12.0, as used in the Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 as well as Oracle Java SE 7u40 and earlier, Java SE 6u60 and earlier, Java SE 5.0u51 and earlier, JRockit R28.2.8 and earlier, JRockit R27.7.6 and earlier, Java SE Embedded 7u40 and earlier, and possibly other products allows remote attackers to cause a denial of service via vectors related to XML attribute names. 
<p>Publish Date: 2013-07-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-4002>CVE-2013-4002</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.1</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002</a></p> <p>Release Date: 2013-07-23</p> <p>Fix Resolution: xerces:xercesImpl:Xerces-J_2_12_0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2013-4002 (High) detected in xercesImpl-2.9.1.jar - autoclosed - ## CVE-2013-4002 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xercesImpl-2.9.1.jar</b></p></summary> <p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI), a complete framework for building parser components and configurations that is extremely modular and easy to program.</p> <p>Path to dependency file: whites5/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/xerces/xercesImpl/2.9.1/xercesImpl-2.9.1.jar</p> <p> Dependency Hierarchy: - spark-core_2.12-2.4.7.jar (Root Library) - hadoop-client-2.6.5.jar - hadoop-hdfs-2.6.5.jar - :x: **xercesImpl-2.9.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/devikab2b/whites5/commit/b24afaf70d8746f42dcb93a7ef65ad261fda5b7f">b24afaf70d8746f42dcb93a7ef65ad261fda5b7f</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> XMLscanner.java in Apache Xerces2 Java Parser before 2.12.0, as used in the Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 as well as Oracle Java SE 7u40 and earlier, Java SE 6u60 and earlier, Java SE 5.0u51 and earlier, JRockit R28.2.8 and earlier, JRockit R27.7.6 and earlier, Java SE Embedded 7u40 and earlier, and possibly other products allows remote attackers to cause a denial of service via vectors related to XML attribute names. 
<p>Publish Date: 2013-07-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-4002>CVE-2013-4002</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.1</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002</a></p> <p>Release Date: 2013-07-23</p> <p>Fix Resolution: xerces:xercesImpl:Xerces-J_2_12_0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in xercesimpl jar autoclosed cve high severity vulnerability vulnerable library xercesimpl jar is the next generation of high performance fully compliant xml parsers in the apache xerces family this new version of xerces introduces the xerces native interface xni a complete framework for building parser components and configurations that is extremely modular and easy to program path to dependency file pom xml path to vulnerable library home wss scanner repository xerces xercesimpl xercesimpl jar dependency hierarchy spark core jar root library hadoop client jar hadoop hdfs jar x xercesimpl jar vulnerable library found in head commit a href found in base branch main vulnerability details xmlscanner java in apache java parser before as used in the java runtime environment jre in ibm java before before before and before as well as oracle java se and earlier java se and earlier java se and earlier jrockit and earlier jrockit and earlier java se embedded and earlier and possibly other products allows remote attackers to cause a denial of service via vectors related to xml attribute names publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution xerces xercesimpl xerces j step up your open source security game with whitesource
0
4,790
24,646,837,526
IssuesEvent
2022-10-17 15:25:19
BioArchLinux/Packages
https://api.github.com/repos/BioArchLinux/Packages
closed
[MAINTAIN] v8
maintain
<!-- Please report the error of one package in one issue! Use multi issues to report multi bugs. Thanks! --> **Log of the bug** <details> https://build.bioarchlinux.org/api/pkg/v8/log/1665752214 </details> **Packages (please complete the following information):** - Package Name: v8 **Description** @dvdesolve Could you help fix it
True
[MAINTAIN] v8 - <!-- Please report the error of one package in one issue! Use multi issues to report multi bugs. Thanks! --> **Log of the bug** <details> https://build.bioarchlinux.org/api/pkg/v8/log/1665752214 </details> **Packages (please complete the following information):** - Package Name: v8 **Description** @dvdesolve Could you help fix it
main
please report the error of one package in one issue use multi issues to report multi bugs thanks log of the bug packages please complete the following information package name description dvdesolve could you help fix it
1
1,813
6,577,312,022
IssuesEvent
2017-09-12 00:01:58
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
FR: yum module should support sub commands such as 'cache'
affects_2.1 feature_idea waiting_on_maintainer
##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible-playbook 2.1.0.0 config file = /Users/g.lynch/git/tos/ansible_role_yum/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT CentOS 6.7, 6.8, 7.x ##### SUMMARY In order to flush/make the cache when _not_ installing a package, it's currently necessary to call yum directly using `command` or `shell`. This results in a `[WARNING]: consider using yum module rather than running yum`. The warning is rather annoying in this situation. ##### STEPS TO REPRODUCE Create a role to manage yum repositories - use yum_repositories - finish with flushing the cache, currently have to use command/shell e.g. ``` --- - name: configure yum template: src: "{{ yum_config|basename }}.j2" dest: "{{ yum_config }}" - name: configure all yum repositories yum_repository: baseurl: "{{ item.baseurl|default(omit) }}" description: "{{ item.description|default('The ' + item.name + ' repository') }}" enabled: "{{ item.enabled|default(True) }}" gpgcakey: "{{ item.gpgcakey|default(omit) }}" gpgcheck: "{{ item.gpgcheck|default(False) }}" gpgkey: "{{ item.gpgkey|default(omit) }}" mirrorlist: "{{ item.mirrorlist|default(omit) }}" name: "{{ item.name }}" state: "{{ item.state|default('present') }}" with_flattened: - "{{ yum_repos_base }}" - "{{ yum_repos_apps }}" # NOTE: currently the yum module does not support cache actions by themselves - name: clean yum cache shell: yum clean all when: yum_clean_all|bool ``` ##### EXPECTED RESULTS Ability to use yum module with cache command handling. ##### ACTUAL RESULTS ``` TASK [ansible_role_yum : clean yum cache] ************************************** changed: [default] [WARNING]: Consider using yum module rather than running yum ```
True
FR: yum module should support sub commands such as 'cache' - ##### ISSUE TYPE - Feature Idea ##### COMPONENT NAME yum ##### ANSIBLE VERSION ``` ansible-playbook 2.1.0.0 config file = /Users/g.lynch/git/tos/ansible_role_yum/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION ##### OS / ENVIRONMENT CentOS 6.7, 6.8, 7.x ##### SUMMARY In order to flush/make the cache when _not_ installing a package, it's currently necessary to call yum directly using `command` or `shell`. This results in a `[WARNING]: consider using yum module rather than running yum`. The warning is rather annoying in this situation. ##### STEPS TO REPRODUCE Create a role to manage yum repositories - use yum_repositories - finish with flushing the cache, currently have to use command/shell e.g. ``` --- - name: configure yum template: src: "{{ yum_config|basename }}.j2" dest: "{{ yum_config }}" - name: configure all yum repositories yum_repository: baseurl: "{{ item.baseurl|default(omit) }}" description: "{{ item.description|default('The ' + item.name + ' repository') }}" enabled: "{{ item.enabled|default(True) }}" gpgcakey: "{{ item.gpgcakey|default(omit) }}" gpgcheck: "{{ item.gpgcheck|default(False) }}" gpgkey: "{{ item.gpgkey|default(omit) }}" mirrorlist: "{{ item.mirrorlist|default(omit) }}" name: "{{ item.name }}" state: "{{ item.state|default('present') }}" with_flattened: - "{{ yum_repos_base }}" - "{{ yum_repos_apps }}" # NOTE: currently the yum module does not support cache actions by themselves - name: clean yum cache shell: yum clean all when: yum_clean_all|bool ``` ##### EXPECTED RESULTS Ability to use yum module with cache command handling. ##### ACTUAL RESULTS ``` TASK [ansible_role_yum : clean yum cache] ************************************** changed: [default] [WARNING]: Consider using yum module rather than running yum ```
main
fr yum module should support sub commands such as cache issue type feature idea component name yum ansible version ansible playbook config file users g lynch git tos ansible role yum ansible cfg configured module search path default w o overrides configuration os environment centos x summary in order to flush make the cache when not installing a package it s currently necessary to call yum directly using command or shell this results in a consider using yum module rather than running yum the warning is rather annoying in this situation steps to reproduce create a role to manage yum repositories use yum repositories finish with flushing the cache currently have to use command shell e g name configure yum template src yum config basename dest yum config name configure all yum repositories yum repository baseurl item baseurl default omit description item description default the item name repository enabled item enabled default true gpgcakey item gpgcakey default omit gpgcheck item gpgcheck default false gpgkey item gpgkey default omit mirrorlist item mirrorlist default omit name item name state item state default present with flattened yum repos base yum repos apps note currently the yum module does not support cache actions by themselves name clean yum cache shell yum clean all when yum clean all bool expected results ability to use yum module with cache command handling actual results task changed consider using yum module rather than running yum
1
217
2,873,265,342
IssuesEvent
2015-06-08 16:08:55
github/hubot-scripts
https://api.github.com/repos/github/hubot-scripts
closed
Problem with msg.http (in teamcity plugin)
needs-maintainer
The teamcity plugin doesn't work for me. When a method is called, I can send debug messages up to the msg.http().headers().get() method, and it doesn't call the callback after this. Since my teamcity server is using https, and a curl request using the same headers and url would not work without the -k or --insecure option (and works fine with -k), I'm wondering if there is an SSL certificate verification somewhere in msg.http.get ? Or do you have another idea? How would you debug this? Thanks for your help in advance Cordially
True
Problem with msg.http (in teamcity plugin) - The teamcity plugin doesn't work for me. When a method is called, I can send debug messages up to the msg.http().headers().get() method, and it doesn't call the callback after this. Since my teamcity server is using https, and a curl request using the same headers and url would not work without the -k or --insecure option (and works fine with -k), I'm wondering if there is an SSL certificate verification somewhere in msg.http.get ? Or do you have another idea? How would you debug this? Thanks for your help in advance Cordially
main
problem with msg http in teamcity plugin the teamcity plugin doesn t work for me when a method is called i can send debug messages up to the msg http headers get method and it doesn t call the callback after this since my teamcity server is using https and a curl request using the same headers and url would not work without the k or insecure option and works fine with k i m wondering if there is an ssl certificate verification somewhere in msg http get or do you have another idea how would you debug this thanks for your help in advance cordially
1
1,984
6,694,207,737
IssuesEvent
2017-10-10 00:19:38
duckduckgo/zeroclickinfo-spice
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
closed
Whois: IA not displaying
Internal Maintainer Input Requested
According to their API response, we've run out of credits: https://beta.duckduckgo.com/js/spice/whois/duckduckgo.com We also don't seem to be using caching, hence how we used up all our credits. --- IA Page: http://duck.co/ia/view/whois [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @b1ake
True
Whois: IA not displaying - According to their API response, we've run out of credits: https://beta.duckduckgo.com/js/spice/whois/duckduckgo.com We also don't seem to be using caching, hence how we used up all our credits. --- IA Page: http://duck.co/ia/view/whois [Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @b1ake
main
whois ia not displaying according to their api response we ve run out of credits we also don t seem to be using caching hence how we used up all our credits ia page
1
41,514
12,832,342,592
IssuesEvent
2020-07-07 07:29:55
rvvergara/todolist-react-version
https://api.github.com/repos/rvvergara/todolist-react-version
closed
CVE-2019-15657 (High) detected in eslint-utils-1.3.1.tgz
security vulnerability
## CVE-2019-15657 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eslint-utils-1.3.1.tgz</b></p></summary> <p>Utilities for ESLint plugins.</p> <p>Library home page: <a href="https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.3.1.tgz">https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.3.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/todolist-react-version/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/todolist-react-version/node_modules/eslint-utils/package.json</p> <p> Dependency Hierarchy: - eslint-5.16.0.tgz (Root Library) - :x: **eslint-utils-1.3.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/rvvergara/todolist-react-version/commit/85fba0e7c02424e61ae0ebd7a786b50a67132bf3">85fba0e7c02424e61ae0ebd7a786b50a67132bf3</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In eslint-utils before 1.4.1, the getStaticValue function can execute arbitrary code. <p>Publish Date: 2019-08-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15657>CVE-2019-15657</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15657">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15657</a></p> <p>Release Date: 2019-08-26</p> <p>Fix Resolution: eslint-utils - 1.4.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-15657 (High) detected in eslint-utils-1.3.1.tgz - ## CVE-2019-15657 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eslint-utils-1.3.1.tgz</b></p></summary> <p>Utilities for ESLint plugins.</p> <p>Library home page: <a href="https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.3.1.tgz">https://registry.npmjs.org/eslint-utils/-/eslint-utils-1.3.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/todolist-react-version/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/todolist-react-version/node_modules/eslint-utils/package.json</p> <p> Dependency Hierarchy: - eslint-5.16.0.tgz (Root Library) - :x: **eslint-utils-1.3.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/rvvergara/todolist-react-version/commit/85fba0e7c02424e61ae0ebd7a786b50a67132bf3">85fba0e7c02424e61ae0ebd7a786b50a67132bf3</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In eslint-utils before 1.4.1, the getStaticValue function can execute arbitrary code. <p>Publish Date: 2019-08-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15657>CVE-2019-15657</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15657">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-15657</a></p> <p>Release Date: 2019-08-26</p> <p>Fix Resolution: eslint-utils - 1.4.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in eslint utils tgz cve high severity vulnerability vulnerable library eslint utils tgz utilities for eslint plugins library home page a href path to dependency file tmp ws scm todolist react version package json path to vulnerable library tmp ws scm todolist react version node modules eslint utils package json dependency hierarchy eslint tgz root library x eslint utils tgz vulnerable library found in head commit a href vulnerability details in eslint utils before the getstaticvalue function can execute arbitrary code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution eslint utils step up your open source security game with whitesource
0
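The advisory's fix resolution is an upgrade to eslint-utils 1.4.1. As a quick sanity check on a dependency tree, a hedged sketch classifying a plain `x.y.z` version string against the vulnerable range (before 1.4.1); it ignores prerelease tags and is not a substitute for a real semver library:

```javascript
// Hedged sketch: CVE-2019-15657 affects eslint-utils versions before 1.4.1.
// Handles plain major.minor.patch strings only.
function isVulnerable(version) {
  const [maj = 0, min = 0, pat = 0] = version.split('.').map(Number);
  if (maj !== 1) return maj < 1; // 0.x vulnerable, 2.x+ not
  if (min !== 4) return min < 4; // 1.0-1.3 vulnerable, 1.5+ not
  return pat < 1;                // 1.4.0 vulnerable, 1.4.1+ fixed
}

module.exports = { isVulnerable };
```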
8,484
11,945,684,719
IssuesEvent
2020-04-03 06:28:47
CMPUT301W20T07/arrival
https://api.github.com/repos/CMPUT301W20T07/arrival
closed
US 06.01.01 - Driver Accepted Requests Offline
Requirement low risk
**Focus:** Offline behaviour **User Story:** As a driver, I want to see requests that I already accepted while offline. **Rationale:** - To allow drivers to keep track of their accepted requests even if the app is closed **Story Points:** 13 **Risk Level:** Low **Testing:**
1.0
US 06.01.01 - Driver Accepted Requests Offline - **Focus:** Offline behaviour **User Story:** As a driver, I want to see requests that I already accepted while offline. **Rationale:** - To allow drivers to keep track of their accepted requests even if the app is closed **Story Points:** 13 **Risk Level:** Low **Testing:**
non_main
us driver accepted requests offline focus offline behaviour user story as a driver i want to see requests that i already accepted while offline rationale to allow drivers to keep track of their accepted requests even if the app is closed story points risk level low testing
0
138,261
30,839,774,713
IssuesEvent
2023-08-02 09:51:21
SambhaviPD/your-recipebuddy
https://api.github.com/repos/SambhaviPD/your-recipebuddy
closed
Write a common method that uses OpenAI's API by sending appropriate prompt as an input
code-refactoring backend-development
Random Recipe, Recipe by Cuisine, Recipe by Ingredients, Recipe by Meal course - The only difference between all these menu options are the prompts with appropriate inputs, the calling logic remains the same. Hence we need to write a common method to invoke the actual API.
1.0
Write a common method that uses OpenAI's API by sending appropriate prompt as an input - Random Recipe, Recipe by Cuisine, Recipe by Ingredients, Recipe by Meal course - The only difference between all these menu options are the prompts with appropriate inputs, the calling logic remains the same. Hence we need to write a common method to invoke the actual API.
non_main
write a common method that uses openai s api by sending appropriate prompt as an input random recipe recipe by cuisine recipe by ingredients recipe by meal course the only difference between all these menu options are the prompts with appropriate inputs the calling logic remains the same hence we need to write a common method to invoke the actual api
0
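The refactor this record asks for, one shared call path with the prompt as the only variable, can be sketched as below. All names are hypothetical (not from the repo), and the API call is injected so the sketch stays offline and keeps the OpenAI client in one place:

```javascript
// Hedged sketch: each menu option contributes only a prompt builder; the
// invocation logic lives in one shared function.
const PROMPTS = {
  random: () => 'Suggest one random recipe.',
  cuisine: ({ cuisine }) => `Suggest a ${cuisine} recipe.`,
  ingredients: ({ ingredients }) => `Suggest a recipe using: ${ingredients.join(', ')}.`,
  mealCourse: ({ course }) => `Suggest a recipe for ${course}.`,
};

// callApi is injected (e.g. a thin wrapper around the OpenAI client), so
// credentials and retry logic stay in a single place.
function getRecipe(callApi, kind, params = {}) {
  const buildPrompt = PROMPTS[kind];
  if (!buildPrompt) throw new Error(`unknown recipe kind: ${kind}`);
  return callApi(buildPrompt(params));
}

module.exports = { getRecipe, PROMPTS };
```

Adding a fifth menu option then means adding one entry to `PROMPTS`, with no change to the calling logic.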
5,307
26,800,974,093
IssuesEvent
2023-02-01 15:05:41
makubacki/mu_devops
https://api.github.com/repos/makubacki/mu_devops
closed
[Bug]: Test
state:needs-triage state:needs-maintainer-feedback type:bug urgency:medium
### Is there an existing issue for this? - [X] I have searched existing issues ### Current Behavior Test ### Expected Behavior Test ### Steps To Reproduce Test ### Build Environment ```markdown - OS(s): Test - Tool Chain(s): Test - Targets Impacted: Test ``` ### Version Information ```text Test ``` ### Urgency Medium ### Are you going to fix this? I will fix it ### Do you need maintainer feedback? Maintainer feedback requested ### Anything else? _No response_
True
[Bug]: Test - ### Is there an existing issue for this? - [X] I have searched existing issues ### Current Behavior Test ### Expected Behavior Test ### Steps To Reproduce Test ### Build Environment ```markdown - OS(s): Test - Tool Chain(s): Test - Targets Impacted: Test ``` ### Version Information ```text Test ``` ### Urgency Medium ### Are you going to fix this? I will fix it ### Do you need maintainer feedback? Maintainer feedback requested ### Anything else? _No response_
main
test is there an existing issue for this i have searched existing issues current behavior test expected behavior test steps to reproduce test build environment markdown os s test tool chain s test targets impacted test version information text test urgency medium are you going to fix this i will fix it do you need maintainer feedback maintainer feedback requested anything else no response
1
3,924
17,649,549,034
IssuesEvent
2021-08-20 11:13:47
centerofci/mathesar
https://api.github.com/repos/centerofci/mathesar
closed
Storybook does not resolve paths from TS config
type: bug work: frontend status: ready restricted: maintainers
## Description The TypeScript config file `tsconfig.json` defines `paths` as follows: ```json "paths": { "@mathesar/*": ["src/*"], "@mathesar-components-dir/*": ["src/components/*"], "@mathesar-components/types": ["src/components/types.d.ts"], "@mathesar-components": ["src/components/index.ts"] } ``` which allows us to import using a much cleaner syntax: ```ts import { portal } from '@mathesar-components'; ``` These paths are not resolved by Storybook and thus stories written for components that use such imports do not compile. ## Expected behavior Storybook should use the same TS config and resolve paths. ## To Reproduce <!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. --> 0. Add the `.svelte` extension to `src/components/modal/__meta__/Modal.stories`. 0. Run Storybook. ## Additional context Aliases set at the bundler level can be configured for Storybook using the [`webpackFinal` field](https://storybook.js.org/docs/svelte/configure/webpack#extending-storybooks-webpack-config) or [`viteFinal` field](https://github.com/eirslett/storybook-builder-vite#customize-vite-config) in `main.js` but since these paths are set at the TypeScript level instead, I'm not sure what needs to be done.
True
Storybook does not resolve paths from TS config - ## Description The TypeScript config file `tsconfig.json` defines `paths` as follows: ```json "paths": { "@mathesar/*": ["src/*"], "@mathesar-components-dir/*": ["src/components/*"], "@mathesar-components/types": ["src/components/types.d.ts"], "@mathesar-components": ["src/components/index.ts"] } ``` which allows us to import using a much cleaner syntax: ```ts import { portal } from '@mathesar-components'; ``` These paths are not resolved by Storybook and thus stories written for components that use such imports do not compile. ## Expected behavior Storybook should use the same TS config and resolve paths. ## To Reproduce <!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. --> 0. Add the `.svelte` extension to `src/components/modal/__meta__/Modal.stories`. 0. Run Storybook. ## Additional context Aliases set at the bundler level can be configured for Storybook using the [`webpackFinal` field](https://storybook.js.org/docs/svelte/configure/webpack#extending-storybooks-webpack-config) or [`viteFinal` field](https://github.com/eirslett/storybook-builder-vite#customize-vite-config) in `main.js` but since these paths are set at the TypeScript level instead, I'm not sure what needs to be done.
main
storybook does not resolve paths from ts config description the typescript config file tsconfig json defines paths as follows json paths mathesar mathesar components dir mathesar components types mathesar components which allows us to import using a much cleaner syntax ts import portal from mathesar components these paths are not resolved by storybook and thus stories written for components that use such imports do not compile expected behavior storybook should use the same ts config and resolve paths to reproduce add the svelte extension to src components modal meta modal stories run storybook additional context aliases set at the bundler level can be configured for storybook using the or in main js but since these paths are set at the typescript level instead i m not sure what needs to be done
1
718
4,309,544,919
IssuesEvent
2016-07-21 16:20:03
CESNET/libnetconf
https://api.github.com/repos/CESNET/libnetconf
closed
libnetconf data model
auto-migrated Maintainability
``` Feature description: Create YANG data model implemented by the library with its own enhancements. As first, libnetconf now generates notifications on datastore lock/unlock. The notification is under "urn:cesnet:params:xml:ns:libnetconf:notifications " namespace. The data model will be announced by the libnetconf-based servers in <hello> messages. ``` Original issue reported on code.google.com by `rkre...@cesnet.cz` on 27 May 2014 at 12:03
True
libnetconf data model - ``` Feature description: Create YANG data model implemented by the library with its own enhancements. As first, libnetconf now generates notifications on datastore lock/unlock. The notification is under "urn:cesnet:params:xml:ns:libnetconf:notifications " namespace. The data model will be announced by the libnetconf-based servers in <hello> messages. ``` Original issue reported on code.google.com by `rkre...@cesnet.cz` on 27 May 2014 at 12:03
main
libnetconf data model feature description create yang data model implemented by the library with its own enhancements as first libnetconf now generates notifications on datastore lock unlock the notification is under urn cesnet params xml ns libnetconf notifications namespace the data model will be announced by the libnetconf based servers in messages original issue reported on code google com by rkre cesnet cz on may at
1
387,078
26,712,045,444
IssuesEvent
2023-01-28 02:20:28
xKabbe/denigma
https://api.github.com/repos/xKabbe/denigma
opened
:dna: [FEATURE] - Rework `EnzymeSettings` To Support Adding/Removing Specific Enzymes :dna:
documentation enhancement frontend JavaScript / TypeScript refactor test
# Description At the moment I have just added `10` sample enzymes to the `EnzymeSettings` component. But this has several disadvantages. It is currently only possible for the user to display specific enzymes within this predefined selection. However, this selection should be customizable by the user. So it should be possible to remove or add specific enzymes. With the current approach in the `SideBar` component, however, this would lead to problems if there were more than 10 enzymes, as the space required for the buttons would then be lacking. Here you could now see 2 options: * Either you set the user a `limit of 5-10 enzymes`, which he can add as options (+ display how many more can be added) * Or the current representation via buttons must be changed to a kind of list representation # Expected Actions - [ ] Think about a new approach for the `EnzymeSettings` component - [ ] Adjust the component and add the corresponding functionalities - [ ] Add the possibility to adjust the current enzyme list by adding and removing specific enzyme - Add -> A simple form for the enzyme name and obligatory data, e.g. recognition sequence and cut indexes (begin/end). Maybe also some data about the origin (e.g. *Escherichia coli*) that could be displayed via tooltip - Remove -> Either single deletion (e.g. hover section where a deletion button slides into the view as it is the case when deleting photos on an iPhone) or multiple selections via a selection list - [ ] If not already implemented, add functionality of enzyme selection in regards to the `SeqViz` component # Definition of Done - [ ] `Code` implemented - [ ] `Tests` implemented and passing - [ ] `Documentation` / `Stories` implemented - [ ] `GitHub Actions` are passing
1.0
:dna: [FEATURE] - Rework `EnzymeSettings` To Support Adding/Removing Specific Enzymes :dna: - # Description At the moment I have just added `10` sample enzymes to the `EnzymeSettings` component. But this has several disadvantages. It is currently only possible for the user to display specific enzymes within this predefined selection. However, this selection should be customizable by the user. So it should be possible to remove or add specific enzymes. With the current approach in the `SideBar` component, however, this would lead to problems if there were more than 10 enzymes, as the space required for the buttons would then be lacking. Here you could now see 2 options: * Either you set the user a `limit of 5-10 enzymes`, which he can add as options (+ display how many more can be added) * Or the current representation via buttons must be changed to a kind of list representation # Expected Actions - [ ] Think about a new approach for the `EnzymeSettings` component - [ ] Adjust the component and add the corresponding functionalities - [ ] Add the possibility to adjust the current enzyme list by adding and removing specific enzyme - Add -> A simple form for the enzyme name and obligatory data, e.g. recognition sequence and cut indexes (begin/end). Maybe also some data about the origin (e.g. *Escherichia coli*) that could be displayed via tooltip - Remove -> Either single deletion (e.g. hover section where a deletion button slides into the view as it is the case when deleting photos on an iPhone) or multiple selections via a selection list - [ ] If not already implemented, add functionality of enzyme selection in regards to the `SeqViz` component # Definition of Done - [ ] `Code` implemented - [ ] `Tests` implemented and passing - [ ] `Documentation` / `Stories` implemented - [ ] `GitHub Actions` are passing
non_main
dna rework enzymesettings to support adding removing specific enzymes dna description at the moment i have just added sample enzymes to the enzymesettings component but this has several disadvantages it is currently only possible for the user to display specific enzymes within this predefined selection however this selection should be customizable by the user so it should be possible to remove or add specific enzymes with the current approach in the sidebar component however this would lead to problems if there were more than enzymes as the space required for the buttons would then be lacking here you could now see options either you set the user a limit of enzymes which he can add as options display how many more can be added or the current representation via buttons must be changed to a kind of list representation expected actions think about a new approach for the enzymesettings component adjust the component and add the corresponding functionalities add the possibility to adjust the current enzyme list by adding and removing specific enzyme add a simple form for the enzyme name and obligatory data e g recognition sequence and cut indexes begin end maybe also some data about the origin e g escherichia coli that could be displayed via tooltip remove either single deletion e g hover section where a deletion button slides into the view as it is the case when deleting photos on an iphone or multiple selections via a selection list if not already implemented add functionality of enzyme selection in regards to the seqviz component definition of done code implemented tests implemented and passing documentation stories implemented github actions are passing
0
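The first option in the record, capping the user's list at 5-10 enzymes and displaying how many more can be added, can be modelled independently of the component. A hedged sketch of the list operations (field names and the EcoRI sample are illustrative; the cap value is an assumption within the record's suggested range):

```javascript
// Hedged sketch: enzyme list with a hard cap, immutable updates for easy
// use in React state, and the "remaining slots" counter the record suggests.
const MAX_ENZYMES = 10;

function addEnzyme(list, enzyme) {
  if (list.length >= MAX_ENZYMES) {
    throw new Error(`limit of ${MAX_ENZYMES} enzymes reached`);
  }
  return [...list, enzyme];
}

function removeEnzyme(list, name) {
  return list.filter((e) => e.name !== name);
}

function remainingSlots(list) {
  return MAX_ENZYMES - list.length;
}

module.exports = { MAX_ENZYMES, addEnzyme, removeEnzyme, remainingSlots };
```

The add form described in the record (name, recognition sequence, cut indexes) maps onto the `enzyme` object's fields; validation of those fields would sit in front of `addEnzyme`.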
1,145
5,004,826,441
IssuesEvent
2016-12-12 08:27:55
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
win_regedit: change detection doesn't work for environment variables in registry keys of type expandstring
affects_2.1 bug_report waiting_on_maintainer windows
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> win_regedit ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> ##### SUMMARY <!--- Explain the problem briefly --> Setting an environment variable as data of an expandstring data type leads to a change every run. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` - name: set an environment variable as data win_regedit: key: "HKCU:\\Environment" value: HOME data: "%USERPROFILE%" type: expandstring ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> No change if the registry contains `%USERPROFILE%.` ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> When win_regedit is reading the registry key, it gets the expanded data instead of the real data. E.g. `C:\Users\<username>` instead of `%USERPROFILE%` <!--- Paste verbatim command output between quotes below -->
True
win_regedit: change detection doesn't work for environment variables in registry keys of type expandstring - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> win_regedit ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.1.0.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> ##### SUMMARY <!--- Explain the problem briefly --> Setting an environment variable as data of an expandstring data type leads to a change every run. ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> <!--- Paste example playbooks or commands between quotes below --> ``` - name: set an environment variable as data win_regedit: key: "HKCU:\\Environment" value: HOME data: "%USERPROFILE%" type: expandstring ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> No change if the registry contains `%USERPROFILE%.` ##### ACTUAL RESULTS <!--- What actually happened? If possible run with high verbosity (-vvvv) --> When win_regedit is reading the registry key, it gets the expanded data instead of the real data. E.g. `C:\Users\<username>` instead of `%USERPROFILE%` <!--- Paste verbatim command output between quotes below -->
main
win regedit change detection doesn t work for environment variables in registry keys of type expandstring issue type bug report component name win regedit ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary setting an environment variable as data of an expandstring data type leads to a change every run steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name set an environment variable as data win regedit key hkcu environment value home data userprofile type expandstring expected results no change if the registry contains userprofile actual results when win regedit is reading the registry key it gets the expanded data instead of the real data e g c users instead of userprofile
1
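The failure mode in this record, comparing the desired `%USERPROFILE%` against the expanded value read back from the registry, can be reproduced with a small simulation (an illustration only, not the module's actual PowerShell code):

```javascript
// Hedged simulation: for REG_EXPAND_SZ values the read-back yields the
// *expanded* string, so the desired-vs-current comparison never matches and
// the task reports "changed" on every run.
function expandEnv(data, env) {
  return data.replace(/%([^%]+)%/g, (whole, name) =>
    name in env ? env[name] : whole);
}

const desired = '%USERPROFILE%';
const readBack = expandEnv(desired, { USERPROFILE: 'C:\\Users\\example' });
// desired !== readBack, which is exactly the spurious "changed" result

module.exports = { expandEnv, desired, readBack };
```

The fix direction implied by the record is to read the raw (unexpanded) registry data when the value type is `expandstring` before comparing.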
1,730
6,574,837,726
IssuesEvent
2017-09-11 14:14:40
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
ipv4_secondaries displays duplicate information
affects_2.2 bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> setup ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 config file = /home/vagrant/ansible/ansible.cfg configured module search path = ['/home/vagrant/ansible/library'] ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> # Enabled smart gathering gathering: smart ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> Ubuntu 16.04.1 ##### SUMMARY <!--- Explain the problem briefly --> ipv4_secondaries displays duplicate address information ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> Run ansible -m setup hostname.foo -a "filter=ansible_eth1" Receive a filtered response with eth1. Here is example of secondaries "ipv4_secondaries": [ { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" }, { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" } ], Information is repeated <!--- Paste example playbooks or commands between quotes below --> ``` ansible -m setup hostname.foo -a "filter=ansible_eth1" ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> ... ``` "ipv4_secondaries": [ { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" }, ], ... 
``` ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> Received ``` ... "ipv4_secondaries": [ { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" }, { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" } ], ... ``` Posting the full verbose output ``` Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: root <10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 10.10.10.83 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234 `" && echo ansible-tmp-1477526333.03-252143209167234="` echo $HOME/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234 `" ) && sleep 0'"'"'' <10.10.10.83> PUT /tmp/tmpZmW3aJ TO /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py <10.10.10.83> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[10.10.10.83]' <10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: root <10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 10.10.10.83 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/ /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py && sleep 0'"'"'' <10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: root <10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.10.10.83 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/" > /dev/null 2>&1 && sleep 0'"'"'' voippbx.xcastlabs.com | SUCCESS => { "ansible_facts": { "ansible_eth1": { "active": true, "device": "eth1", "features": { "busy_poll": "on [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_segmentation": 
"off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]" }, "ipv4": { "address": "75.145.154.230", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" }, "ipv4_secondaries": [ { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" }, { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" } ], "ipv6": [ { "address": "fe80::5c1c:e5ff:fe35:7c81", "prefix": "64", "scope": "link" } ], "macaddress": "5e:1c:e5:35:7c:81", "module": "virtio_net", "mtu": 1500, "pciid": "virtio4", "promisc": false, "type": "ether" } }, "changed": false, "invocation": { "module_args": { "fact_path": "/etc/ansible/facts.d", "filter": "ansible_eth1", "gather_subset": [ "all" ], "gather_timeout": 10 }, "module_name": "setup" } } ```
True
ipv4_secondaries displays duplicate information - <!--- Verify first that your issue/request is not already reported in GitHub --> ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> setup ##### ANSIBLE VERSION <!--- Paste verbatim output from “ansible --version” between quotes below --> ``` ansible 2.2.0.0 config file = /home/vagrant/ansible/ansible.cfg configured module search path = ['/home/vagrant/ansible/library'] ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> # Enabled smart gathering gathering: smart ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say “N/A” for anything that is not platform-specific. --> Ubuntu 16.04.1 ##### SUMMARY <!--- Explain the problem briefly --> ipv4_secondaries displays duplicate address information ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem. For new features, show how the feature would be used. --> Run ansible -m setup hostname.foo -a "filter=ansible_eth1" Receive a filtered response with eth1. Here is example of secondaries "ipv4_secondaries": [ { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" }, { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" } ], Information is repeated <!--- Paste example playbooks or commands between quotes below --> ``` ansible -m setup hostname.foo -a "filter=ansible_eth1" ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> ... 
``` "ipv4_secondaries": [ { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" }, ], ... ``` ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> Received ``` ... "ipv4_secondaries": [ { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" }, { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" } ], ... ``` Posting the full verbose output ``` Loading callback plugin minimal of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc Using module file /usr/lib/python2.7/dist-packages/ansible/modules/core/system/setup.py <10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: root <10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 10.10.10.83 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234 `" && echo ansible-tmp-1477526333.03-252143209167234="` echo $HOME/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234 `" ) && sleep 0'"'"'' <10.10.10.83> PUT /tmp/tmpZmW3aJ TO /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py <10.10.10.83> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r '[10.10.10.83]' <10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: 
root <10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r 10.10.10.83 '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/ /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py && sleep 0'"'"'' <10.10.10.83> ESTABLISH SSH CONNECTION FOR USER: root <10.10.10.83> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/vagrant/.ansible/cp/ansible-ssh-%h-%p-%r -tt 10.10.10.83 '/bin/sh -c '"'"'/usr/bin/python /root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/setup.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1477526333.03-252143209167234/" > /dev/null 2>&1 && sleep 0'"'"'' voippbx.xcastlabs.com | SUCCESS => { "ansible_facts": { "ansible_eth1": { "active": true, "device": "eth1", "features": { "busy_poll": "on [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_vlan_filter": "on [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off 
[fixed]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "off [fixed]" }, "ipv4": { "address": "75.145.154.230", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" }, "ipv4_secondaries": [ { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" }, { "address": "75.145.154.231", "broadcast": "75.145.154.239", "netmask": "255.255.255.240", "network": "75.145.154.224" } ], "ipv6": [ { "address": "fe80::5c1c:e5ff:fe35:7c81", "prefix": "64", "scope": "link" } ], "macaddress": "5e:1c:e5:35:7c:81", "module": "virtio_net", "mtu": 1500, "pciid": "virtio4", "promisc": false, "type": "ether" } }, "changed": false, "invocation": { "module_args": { "fact_path": "/etc/ansible/facts.d", "filter": "ansible_eth1", "gather_subset": [ "all" ], "gather_timeout": 10 }, "module_name": "setup" } } ```
main
secondaries displays duplicate information issue type bug report component name setup ansible version ansible config file home vagrant ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables enabled smart gathering gathering smart os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary secondaries displays duplicate address information steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used run ansible m setup hostname foo a filter ansible receive a filtered response with here is example of secondaries secondaries address broadcast netmask network address broadcast netmask network information is repeated ansible m setup hostname foo a filter ansible expected results secondaries address broadcast netmask network actual results received secondaries address broadcast netmask network address broadcast netmask network posting the full verbose output loading callback plugin minimal of type stdout from usr lib dist packages ansible plugins callback init pyc using module file usr lib dist packages ansible modules core system setup py establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to root ansible tmp ansible tmp setup py ssh exec sftp b vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o 
passwordauthentication no o user root o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp setup py sleep establish ssh connection for user root ssh exec ssh vvv c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home vagrant ansible cp ansible ssh h p r tt bin sh c usr bin python root ansible tmp ansible tmp setup py rm rf root ansible tmp ansible tmp dev null sleep voippbx xcastlabs com success ansible facts ansible active true device features busy poll on fcoe mtu off generic receive offload on generic segmentation offload on highdma on fwd offload off large receive offload off loopback off netns local off ntuple filters off receive hashing off rx all off rx checksumming on rx fcs off rx vlan filter on rx vlan offload off rx vlan stag filter off rx vlan stag hw parse off scatter gather on tcp segmentation offload on tx checksum fcoe crc off tx checksum ip generic on tx checksum off tx checksum off tx checksum sctp off tx checksumming on tx fcoe segmentation off tx gre segmentation off tx gso robust on tx ipip segmentation off tx lockless off tx nocache copy off tx scatter gather on tx scatter gather fraglist off tx sit segmentation off tx segmentation on tx tcp ecn segmentation on tx tcp segmentation on tx udp tnl segmentation off tx vlan offload off tx vlan stag hw insert off udp fragmentation offload on vlan challenged off address broadcast netmask network secondaries address broadcast 
netmask network address broadcast netmask network address prefix scope link macaddress module virtio net mtu pciid promisc false type ether changed false invocation module args fact path etc ansible facts d filter ansible gather subset all gather timeout module name setup
1
180,898
13,964,602,627
IssuesEvent
2020-10-25 18:49:35
DevLan-Support/tickets
https://api.github.com/repos/DevLan-Support/tickets
opened
[BUG] Task #4884662 for Kingdoms v1.9.1.3 generated an exception
Requires Testing
[19:47:12]: [WARN] [Kingdoms] Task #4884662 for Kingdoms v1.9.1.3 generated an exception java.lang.ClassCastException: class org.bukkit.craftbukkit.v1_16_R2.block.CraftBlockState cannot be cast to class org.bukkit.block.Chest (org.bukkit.craftbukkit.v1_16_R2.block.CraftBlockState and org.bukkit.block.Chest are in unnamed module of loader 'app') at org.kingdoms.managers.ProtectionSignsManager.lambda$doubleChestProtectNotifier$5(ProtectionSignsManager.java:381) ~[?:?] at org.bukkit.craftbukkit.v1_16_R2.scheduler.CraftTask.run(CraftTask.java:99) ~[patched_1.16.3.jar:git-Paper-246] at org.bukkit.craftbukkit.v1_16_R2.scheduler.CraftScheduler.mainThreadHeartbeat(CraftScheduler.java:468) ~[patched_1.16.3.jar:git-Paper-246] at net.minecraft.server.v1_16_R2.MinecraftServer.b(MinecraftServer.java:1296) ~[patched_1.16.3.jar:git-Paper-246] at net.minecraft.server.v1_16_R2.DedicatedServer.b(DedicatedServer.java:371) ~[patched_1.16.3.jar:git-Paper-246] at net.minecraft.server.v1_16_R2.MinecraftServer.a(MinecraftServer.java:1211) ~[patched_1.16.3.jar:git-Paper-246] at net.minecraft.server.v1_16_R2.MinecraftServer.w(MinecraftServer.java:999) ~[patched_1.16.3.jar:git-Paper-246] at net.minecraft.server.v1_16_R2.MinecraftServer.lambda$a$0(MinecraftServer.java:177) ~[patched_1.16.3.jar:git-Paper-246] at java.lang.Thread.run(Thread.java:834) [?:?]
1.0
[BUG] Task #4884662 for Kingdoms v1.9.1.3 generated an exception - [19:47:12]: [WARN] [Kingdoms] Task #4884662 for Kingdoms v1.9.1.3 generated an exception java.lang.ClassCastException: class org.bukkit.craftbukkit.v1_16_R2.block.CraftBlockState cannot be cast to class org.bukkit.block.Chest (org.bukkit.craftbukkit.v1_16_R2.block.CraftBlockState and org.bukkit.block.Chest are in unnamed module of loader 'app') at org.kingdoms.managers.ProtectionSignsManager.lambda$doubleChestProtectNotifier$5(ProtectionSignsManager.java:381) ~[?:?] at org.bukkit.craftbukkit.v1_16_R2.scheduler.CraftTask.run(CraftTask.java:99) ~[patched_1.16.3.jar:git-Paper-246] at org.bukkit.craftbukkit.v1_16_R2.scheduler.CraftScheduler.mainThreadHeartbeat(CraftScheduler.java:468) ~[patched_1.16.3.jar:git-Paper-246] at net.minecraft.server.v1_16_R2.MinecraftServer.b(MinecraftServer.java:1296) ~[patched_1.16.3.jar:git-Paper-246] at net.minecraft.server.v1_16_R2.DedicatedServer.b(DedicatedServer.java:371) ~[patched_1.16.3.jar:git-Paper-246] at net.minecraft.server.v1_16_R2.MinecraftServer.a(MinecraftServer.java:1211) ~[patched_1.16.3.jar:git-Paper-246] at net.minecraft.server.v1_16_R2.MinecraftServer.w(MinecraftServer.java:999) ~[patched_1.16.3.jar:git-Paper-246] at net.minecraft.server.v1_16_R2.MinecraftServer.lambda$a$0(MinecraftServer.java:177) ~[patched_1.16.3.jar:git-Paper-246] at java.lang.Thread.run(Thread.java:834) [?:?]
non_main
task for kingdoms generated an exception task for kingdoms generated an exception java lang classcastexception class org bukkit craftbukkit block craftblockstate cannot be cast to class org bukkit block chest org bukkit craftbukkit block craftblockstate and org bukkit block chest are in unnamed module of loader app at org kingdoms managers protectionsignsmanager lambda doublechestprotectnotifier protectionsignsmanager java at org bukkit craftbukkit scheduler crafttask run crafttask java at org bukkit craftbukkit scheduler craftscheduler mainthreadheartbeat craftscheduler java at net minecraft server minecraftserver b minecraftserver java at net minecraft server dedicatedserver b dedicatedserver java at net minecraft server minecraftserver a minecraftserver java at net minecraft server minecraftserver w minecraftserver java at net minecraft server minecraftserver lambda a minecraftserver java at java lang thread run thread java
0
89,518
11,251,444,017
IssuesEvent
2020-01-11 00:29:37
chapel-lang/chapel
https://api.github.com/repos/chapel-lang/chapel
closed
Remove qualified access from `use` statements?
area: Language type: Design user issue
Should `use` statements enable qualified access? E.g. ``` chapel module Lib { var someVariable = 1; } module LibUser { use Lib; } module Main { use LibUser; var x = Lib.someVariable; // Lib.someVariable is qualified access } ``` Note that there is some support for allowing `use Lib` to allow qualified access within the module containing that statement (so later on in `module LibUser` in the above example there could be an expression like `Lib.someVariable`). Reasons to keep qualified access for `use`: * it would be a breaking change and it relies on `import` existing * qualified access is still available for resolving naming conflict between modules in the common case that modules are `use`d Reasons to remove qualified access for `use`: * avoids some unintended problems from qualified + unqualified access in the same scope * it separates concerns between `use` and `import` to allow for greater control of symbol visibility * `public use Lib` within M would hide the fact that symbols come from `Lib` (so `M.Lib` will not be available) * `public import Lib` within M would expose `Lib` within `M` (so `M.Lib` will be available) * This is nice because making `import` public or private controls the scope of the module imported; while making a `use` public or private only affects the contents of the module `use`d. --- Forked from https://github.com/chapel-lang/chapel/issues/13119#issuecomment-520984841 and https://github.com/chapel-lang/chapel/issues/11262#issuecomment-526671328. Also, this topic is the first of the Open Questions from #13831. Today, Chapel's `use` statement does three things AFAICT: 1. unqualified access to a module's symbols in current scope 2. qualified access to the symbol in current scope 3. 
if `public use` in current scope and current scope is a module declaration, a dependent scope can inherit behaviors 1 and 2 through another `use` statement Having both qualified and unqualified accesses being allowed by one statement causes some unintended problems: #11262, #13925, https://github.com/chapel-lang/chapel/issues/13925#issuecomment-526399389 So it got me thinking, how often would it be that a user actually wants both qualified and unqualified access to a symbol? Rarely. It would be better to separate these behaviors as separate statements. **This proposal is to remove qualified access on `use` statements.** Caveat: `use` statements would only do unqualified access, so users that prefer qualified access need `import` statements #13119 #13831 to be implemented before this proposal is accepted. One idiom today is `use only` to allow only qualified access: ```chapel use M only; use M except *; ``` Under this proposal, these statements effectively become no-ops. **This would be a breaking change** and the best long-term course is to introduce a compiler error to recommend using `import` instead. (It could be a warning, but I don't see how warnings that allow no-op behaviors are useful when the user is likely coming from outdated information or is confused, so their code will likely break later anyway.)
1.0
Remove qualified access from `use` statements? - Should `use` statements enable qualified access? E.g. ``` chapel module Lib { var someVariable = 1; } module LibUser { use Lib; } module Main { use LibUser; var x = Lib.someVariable; // Lib.someVariable is qualified access } ``` Note that there is some support for allowing `use Lib` to allow qualified access within the module containing that statement (so later on in `module LibUser` in the above example there could be an expression like `Lib.someVariable`). Reasons to keep qualified access for `use`: * it would be a breaking change and it relies on `import` existing * qualified access is still available for resolving naming conflict between modules in the common case that modules are `use`d Reasons to remove qualified access for `use`: * avoids some unintended problems from qualified + unqualified access in the same scope * it separates concerns between `use` and `import` to allow for greater control of symbol visibility * `public use Lib` within M would hide the fact that symbols come from `Lib` (so `M.Lib` will not be available) * `public import Lib` within M would expose `Lib` within `M` (so `M.Lib` will be available) * This is nice because making `import` public or private controls the scope of the module imported; while making a `use` public or private only affects the contents of the module `use`d. --- Forked from https://github.com/chapel-lang/chapel/issues/13119#issuecomment-520984841 and https://github.com/chapel-lang/chapel/issues/11262#issuecomment-526671328. Also, this topic is the first of the Open Questions from #13831. Today, Chapel's `use` statement does three things AFAICT: 1. unqualified access to a module's symbols in current scope 2. qualified access to the symbol in current scope 3. 
if `public use` in current scope and current scope is a module declaration, a dependent scope can inherit behaviors 1 and 2 through another `use` statement Having both qualified and unqualified accesses being allowed by one statement causes some unintended problems: #11262, #13925, https://github.com/chapel-lang/chapel/issues/13925#issuecomment-526399389 So it got me thinking, how often would it be that a user actually wants both qualified and unqualified access to a symbol? Rarely. It would be better to separate these behaviors as separate statements. **This proposal is to remove qualified access on `use` statements.** Caveat: `use` statements would only do unqualified access, so users that prefer qualified access need `import` statements #13119 #13831 to be implemented before this proposal is accepted. One idiom today is `use only` to allow only qualified access: ```chapel use M only; use M except *; ``` Under this proposal, these statements effectively become no-ops. **This would be a breaking change** and the best long-term course is to introduce a compiler error to recommend using `import` instead. (It could be a warning, but I don't see how warnings that allow no-op behaviors are useful when the user is likely coming from outdated information or is confused, so their code will likely break later anyway.)
non_main
remove qualified access from use statements should use statements enable qualified access e g chapel module lib var somevariable module libuser use lib module main use libuser var x lib somevariable lib somevariable is qualified access note that there is some support for allowing use lib to allow qualified access within the module containing that statement so later on in module libuser in the above example there could be an expression like lib somevariable reasons to keep qualified access for use it would be a breaking change and it relies on import existing qualified access is still available for resolving naming conflict between modules in the common case that modules are use d reasons to remove qualified access for use avoids some unintended problems from qualified unqualified access in the same scope it separates concerns between use and import to allow for greater control of symbol visibility public use lib within m would hide the fact that symbols come from lib so m lib will not be available public import lib within m would expose lib within m so m lib will be available this is nice because making import public or private controls the scope of the module imported while making a use public or private only affects the contents of the module use d forked from and also this topic is the first of the open questions from today chapel s use statement does three things afaict unqualified access to a module s symbols in current scope qualified access to the symbol in current scope if public use in current scope and current scope is a module declaration a dependent scope can inherit behaviors and through another use statement having both qualified and unqualified accesses being allowed by one statement causes some unintended problems so it got me thinking how often would it be that a user actually wants both qualified and unqualified access to a symbol rarely it would be better to separate these behaviors as separate statements this proposal is to remove qualified 
access on use statements caveat use statements would only do unqualified access so users that prefer qualified access need import statements to be implemented before this proposal is accepted one idiom today is use only to allow only qualified access chapel use m only use m except under this proposal these statements effectively become no ops this would be a breaking change and the best long term course is to introduce a compiler error to recommend using import instead it could be a warning but i don t see how warnings that allow no op behaviors are useful when the user is likely coming from outdated information or is confused so their code will likely break later anyway
0
3,134
12,034,215,726
IssuesEvent
2020-04-13 15:38:34
alacritty/alacritty
https://api.github.com/repos/alacritty/alacritty
closed
CopyPaste Action
C - waiting on maintainer
In the recent Terminal Emulator for Windows10 they implemented copy and paste on right click. To also get that in Alacritty, I would like an "action" which does "Copy" if there is a selection and "Paste" if there is none. This seems like a pretty simple implementation. If you point me in the right direction I could do the merge request myself.
True
CopyPaste Action - In the recent Terminal Emulator for Windows10 they implemented copy and paste on right click. To also get that in Alacritty, I would like an "action" which does "Copy" if there is a selection and "Paste" if there is none. This seems like a pretty simple implementation. If you point me in the right direction I could do the merge request myself.
main
copypaste action in the recent terminal emulator for they implemented copy and paste on right click to also get that in alacritty i would like an action which does copy if there is a selection and paste if there is none this seems like a pretty simple implementation if you point me in the right direction i could do the merge request myself
1
4,160
19,957,861,448
IssuesEvent
2022-01-28 02:53:25
microsoft/UVAtlas
https://api.github.com/repos/microsoft/UVAtlas
opened
Retire VS 2017 support
maintainence
Visual Studio 2017 reaches it's [mainstream end-of-life](https://docs.microsoft.com/en-us/lifecycle/products/visual-studio-2017) on **April 2022**. I should retire these projects that time: * UVAtlas_2017_Win10.vcxproj * UVAtlas_Windows10_2017.vcxproj > I am not sure when I'll be retiring Xbox One XDK support which is not supported for VS 2019 or later. That means I'm not sure if I'll delete ``UVAtlas_XboxOneXDK_2017.vcxproj`` or not with this change.
True
Retire VS 2017 support - Visual Studio 2017 reaches it's [mainstream end-of-life](https://docs.microsoft.com/en-us/lifecycle/products/visual-studio-2017) on **April 2022**. I should retire these projects that time: * UVAtlas_2017_Win10.vcxproj * UVAtlas_Windows10_2017.vcxproj > I am not sure when I'll be retiring Xbox One XDK support which is not supported for VS 2019 or later. That means I'm not sure if I'll delete ``UVAtlas_XboxOneXDK_2017.vcxproj`` or not with this change.
main
retire vs support visual studio reaches it s on april i should retire these projects that time uvatlas vcxproj uvatlas vcxproj i am not sure when i ll be retiring xbox one xdk support which is not supported for vs or later that means i m not sure if i ll delete uvatlas xboxonexdk vcxproj or not with this change
1
108,006
4,323,489,948
IssuesEvent
2016-07-25 17:09:12
enviPath/enviPath
https://api.github.com/repos/enviPath/enviPath
opened
Link from pathway to the node should be directed to the structure entry on the compound page
high priority user interface
At the moment, if you click on the node in the pathway, you get redirected to the structure. This should be changed to redirect to the compound page, but to an anchor at the correct structure.
1.0
Link from pathway to the node should be directed to the structure entry on the compound page - At the moment, if you click on the node in the pathway, you get redirected to the structure. This should be changed to redirect to the compound page, but to an anchor at the correct structure.
non_main
link from pathway to the node should be directed to the structure entry on the compound page at the moment if you click on the node in the pathway you get redirected to the structure this should be changed to redirect to the compound page but to an anchor at the correct structure
0
2,666
9,122,241,834
IssuesEvent
2019-02-23 05:50:19
varenc/homebrew-ffmpeg
https://api.github.com/repos/varenc/homebrew-ffmpeg
closed
Tesseract
maintainer-feedback
There are now two flavours of Tesseract: [tessseract](https://github.com/Homebrew/homebrew-core/blob/master/Formula/tesseract.rb) and [tesseract-lang](https://github.com/Homebrew/homebrew-core/blob/master/Formula/tesseract-lang.rb). I guess we should allow both: - `--with-tesseract` which comes only with English - `--with-tesseract-lang` which enables support for other languages
True
Tesseract - There are now two flavours of Tesseract: [tessseract](https://github.com/Homebrew/homebrew-core/blob/master/Formula/tesseract.rb) and [tesseract-lang](https://github.com/Homebrew/homebrew-core/blob/master/Formula/tesseract-lang.rb). I guess we should allow both: - `--with-tesseract` which comes only with English - `--with-tesseract-lang` which enables support for other languages
main
tesseract there are now two flavours of tesseract and i guess we should allow both with tesseract which comes only with english with tesseract lang which enables support for other languages
1
1,850
6,577,390,846
IssuesEvent
2017-09-12 00:35:00
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
ec2_vpc_net always returns "changed" when state=present
affects_2.2 aws bug_report cloud waiting_on_maintainer
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc_net ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 56ba10365c) last updated 2016/05/05 17:12:03 (GMT +550) ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT N/A ##### SUMMARY ec2_vpc_net always sets "changed" when state=present, because the code calls update_vpc_tags() and sets changed if `tags is not None or name is not None`, regardless of whether the vpc exists or not. So if you specify name and cidr_block, the tags are always updated. cc: @defionscode ##### STEPS TO REPRODUCE ``` - ec2_vpc_net: state: present name: ExampleVPC cidr_block: 192.0.2.0/24 register: vpc - fail: msg="I created a VPC" when: vpc.changed ```
True
ec2_vpc_net always returns "changed" when state=present - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME ec2_vpc_net ##### ANSIBLE VERSION ``` ansible 2.2.0 (devel 56ba10365c) last updated 2016/05/05 17:12:03 (GMT +550) ``` ##### CONFIGURATION Default ##### OS / ENVIRONMENT N/A ##### SUMMARY ec2_vpc_net always sets "changed" when state=present, because the code calls update_vpc_tags() and sets changed if `tags is not None or name is not None`, regardless of whether the vpc exists or not. So if you specify name and cidr_block, the tags are always updated. cc: @defionscode ##### STEPS TO REPRODUCE ``` - ec2_vpc_net: state: present name: ExampleVPC cidr_block: 192.0.2.0/24 register: vpc - fail: msg="I created a VPC" when: vpc.changed ```
main
vpc net always returns changed when state present issue type bug report component name vpc net ansible version ansible devel last updated gmt configuration default os environment n a summary vpc net always sets changed when state present because the code calls update vpc tags and sets changed if tags is not none or name is not none regardless of whether the vpc exists or not so if you specify name and cidr block the tags are always updated cc defionscode steps to reproduce vpc net state present name examplevpc cidr block register vpc fail msg i created a vpc when vpc changed
1
852
4,513,273,376
IssuesEvent
2016-09-04 06:20:10
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
zypper_repository module should have a fingerprint option
feature_idea waiting_on_maintainer
When adding a repository using zypper_repository, its public key may not automatically be stored as trusted. In case of the "zypper" command, it would ask interactively whether a certain GPG key fingerprint would be accepted by the user or not. However, when scripting this using the ansible zypper_repository module, this is not possible. Some users use the "disable_gpg_check" option, but this disables the GPG check completely, thus opening a security vulnerability. Thus, the user of the ansible zypper_repository module should be able to explicitly specify an acceptable GPG key (or multiple acceptable GPG keys) by verbatimly quoting the fingerprint of that key. This way, it is prevented that untrusted software is installed using ansible. ## Example Instead of ``` zypper_repository: repo=http://download.opensuse.org/repositories/Application:/Geo/openSUSE_13.1/ name=/Application:/Geo/openSUSE_13.1/ state=present disable_gpg_check=yes ``` use ``` zypper_repository: repo=http://download.opensuse.org/repositories/Application:/Geo/openSUSE_13.1/ name=/Application:/Geo/openSUSE_13.1/ state=present acceptable_gpg_key_fingerprint=195E211106BC205D2A9C2222CC7F07489591C39B ```
True
zypper_repository module should have a fingerprint option - When adding a repository using zypper_repository, its public key may not automatically be stored as trusted. In case of the "zypper" command, it would ask interactively whether a certain GPG key fingerprint would be accepted by the user or not. However, when scripting this using the ansible zypper_repository module, this is not possible. Some users use the "disable_gpg_check" option, but this disables the GPG check completely, thus opening a security vulnerability. Thus, the user of the ansible zypper_repository module should be able to explicitly specify an acceptable GPG key (or multiple acceptable GPG keys) by verbatimly quoting the fingerprint of that key. This way, it is prevented that untrusted software is installed using ansible. ## Example Instead of ``` zypper_repository: repo=http://download.opensuse.org/repositories/Application:/Geo/openSUSE_13.1/ name=/Application:/Geo/openSUSE_13.1/ state=present disable_gpg_check=yes ``` use ``` zypper_repository: repo=http://download.opensuse.org/repositories/Application:/Geo/openSUSE_13.1/ name=/Application:/Geo/openSUSE_13.1/ state=present acceptable_gpg_key_fingerprint=195E211106BC205D2A9C2222CC7F07489591C39B ```
main
zypper repository module should have a fingerprint option when adding a repository using zypper repository its public key may not automatically be stored as trusted in case of the zypper command it would ask interactively whether a certain gpg key fingerprint would be accepted by the user or not however when scripting this using the ansible zypper repository module this is not possible some users use the disable gpg check option but this disables the gpg check completely thus opening a security vulnerability thus the user of the ansible zypper repository module should be able to explicitly specify an acceptable gpg key or multiple acceptable gpg keys by verbatimly quoting the fingerprint of that key this way it is prevented that untrusted software is installed using ansible example instead of zypper repository repo name application geo opensuse state present disable gpg check yes use zypper repository repo name application geo opensuse state present acceptable gpg key fingerprint
1
2,848
10,219,571,421
IssuesEvent
2019-08-15 18:56:08
arcticicestudio/styleguide-javascript
https://api.github.com/repos/arcticicestudio/styleguide-javascript
closed
lint-staged
context-workflow scope-dx scope-maintainability scope-quality type-feature
<p align="center"><img src="https://user-images.githubusercontent.com/7836623/48658851-01e38400-ea49-11e8-911e-d859eefe6dd5.png" width="25%" /></p> > Epic: #8 > Must be resolved **after** #11 Integrate [lint-staged][gh-lint-staged] to run linters against staged Git files to prevent to add code that violates any style guide into the code base. <p align="center"><img src="https://raw.githubusercontent.com/okonet/lint-staged/master/screenshots/lint-staged-prettier.gif" width="80%" /></p> ### Configuration The configuration file `lint-staged.config.js` will be placed in the project root and includes the command that should be run for matching file extensions (globs). It will include at least the three following entries with the same order as listed here: 1. `prettier --list-different` - Run Prettier (#11) against `*.{js,json,yml}` to ensure all files are formatted correctly. The `--list-different` prints the found files that are not conform to the Prettier configuration. 2. `eslint` - Run ESLint against `*.{js}` to ensure all JavaScript files are compliant to the style guide after being formatted with Prettier. 3. `remark --no-stdout` - Run remark-lint against `*.md` to ensure all Markdown files are compliant to the style guide. The `--no-stdout` flag suppresses the output of the parsed file content. ## Tasks - [x] Install [lint-staged][npm-lint-staged] package. - [x] Implement `lint-staged.config.js` configuration file. [gh-lint-staged]: https://github.com/okonet/lint-staged [npm-lint-staged]: https://www.npmjs.com/package/lint-staged
True
lint-staged - <p align="center"><img src="https://user-images.githubusercontent.com/7836623/48658851-01e38400-ea49-11e8-911e-d859eefe6dd5.png" width="25%" /></p> > Epic: #8 > Must be resolved **after** #11 Integrate [lint-staged][gh-lint-staged] to run linters against staged Git files to prevent to add code that violates any style guide into the code base. <p align="center"><img src="https://raw.githubusercontent.com/okonet/lint-staged/master/screenshots/lint-staged-prettier.gif" width="80%" /></p> ### Configuration The configuration file `lint-staged.config.js` will be placed in the project root and includes the command that should be run for matching file extensions (globs). It will include at least the three following entries with the same order as listed here: 1. `prettier --list-different` - Run Prettier (#11) against `*.{js,json,yml}` to ensure all files are formatted correctly. The `--list-different` prints the found files that are not conform to the Prettier configuration. 2. `eslint` - Run ESLint against `*.{js}` to ensure all JavaScript files are compliant to the style guide after being formatted with Prettier. 3. `remark --no-stdout` - Run remark-lint against `*.md` to ensure all Markdown files are compliant to the style guide. The `--no-stdout` flag suppresses the output of the parsed file content. ## Tasks - [x] Install [lint-staged][npm-lint-staged] package. - [x] Implement `lint-staged.config.js` configuration file. [gh-lint-staged]: https://github.com/okonet/lint-staged [npm-lint-staged]: https://www.npmjs.com/package/lint-staged
main
lint staged epic must be resolved after integrate to run linters against staged git files to prevent to add code that violates any style guide into the code base configuration the configuration file lint staged config js will be placed in the project root and includes the command that should be run for matching file extensions globs it will include at least the three following entries with the same order as listed here prettier list different run prettier against js json yml to ensure all files are formatted correctly the list different prints the found files that are not conform to the prettier configuration eslint run eslint against js to ensure all javascript files are compliant to the style guide after being formatted with prettier remark no stdout run remark lint against md to ensure all markdown files are compliant to the style guide the no stdout flag suppresses the output of the parsed file content tasks install package implement lint staged config js configuration file
1
299,441
22,606,465,556
IssuesEvent
2022-06-29 13:39:22
appsmithorg/appsmith
https://api.github.com/repos/appsmithorg/appsmith
opened
[Docs]: Java heap specs which Appsmith is hosted
Documentation
<p>Users would like to know about necessary resources for hosting Appsmith, and information about improving Appsmith instance performance by increasing available resources (via the `APPSMITH_JAVA_HEAP_ARG` variable in the .env file).</p> <p>Discord thread here: https://discord.com/channels/725602949748752515/760761686549463060/991580506988748893</p>
1.0
[Docs]: Java heap specs which Appsmith is hosted - <p>Users would like to know about necessary resources for hosting Appsmith, and information about improving Appsmith instance performance by increasing available resources (via the `APPSMITH_JAVA_HEAP_ARG` variable in the .env file).</p> <p>Discord thread here: https://discord.com/channels/725602949748752515/760761686549463060/991580506988748893</p>
non_main
java heap specs which appsmith is hosted users would like to know about necessary resources for hosting appsmith and information about improving appsmith instance performance by increasing available resources via the appsmith java heap arg variable in the env file discord thread here
0
5,813
30,790,981,731
IssuesEvent
2023-07-31 16:08:37
obi1kenobi/trustfall
https://api.github.com/repos/obi1kenobi/trustfall
closed
Test-drive adapters to ensure common edge cases are handled correctly
A-adapter A-errors C-enhancement C-maintainability E-help-wanted E-mentor E-medium
Before using a new adapter, Trustfall could "test drive" it to make sure it adequately handles edge cases: - call `resolve_property` with a `None` active vertex for some property and assert that it got a `FieldValue::Null` property value - call `resolve_neighbors` with a `None` active vertex for some edge and assert that it got an empty iterable of neighbors - call `resolve_coercion` with a `None` active vertex for some plausible type coercion (if any) and assert that it got a `false` result - perhaps even assert that sending multiple contexts into these functions means the contexts are returned in the same order. This will require some schema introspection (to generate valid type / property / edge / coercion values) but should be cheap perf-wise. It could be implemented transparently in the `trustfall` crate with an optional default-enabled feature. Particularly perf-sensitive applications could opt out of the feature.
True
Test-drive adapters to ensure common edge cases are handled correctly - Before using a new adapter, Trustfall could "test drive" it to make sure it adequately handles edge cases: - call `resolve_property` with a `None` active vertex for some property and assert that it got a `FieldValue::Null` property value - call `resolve_neighbors` with a `None` active vertex for some edge and assert that it got an empty iterable of neighbors - call `resolve_coercion` with a `None` active vertex for some plausible type coercion (if any) and assert that it got a `false` result - perhaps even assert that sending multiple contexts into these functions means the contexts are returned in the same order. This will require some schema introspection (to generate valid type / property / edge / coercion values) but should be cheap perf-wise. It could be implemented transparently in the `trustfall` crate with an optional default-enabled feature. Particularly perf-sensitive applications could opt out of the feature.
main
test drive adapters to ensure common edge cases are handled correctly before using a new adapter trustfall could test drive it to make sure it adequately handles edge cases call resolve property with a none active vertex for some property and assert that it got a fieldvalue null property value call resolve neighbors with a none active vertex for some edge and assert that it got an empty iterable of neighbors call resolve coercion with a none active vertex for some plausible type coercion if any and assert that it got a false result perhaps even assert that sending multiple contexts into these functions means the contexts are returned in the same order this will require some schema introspection to generate valid type property edge coercion values but should be cheap perf wise it could be implemented transparently in the trustfall crate with an optional default enabled feature particularly perf sensitive applications could opt out of the feature
1
318,084
27,284,343,349
IssuesEvent
2023-02-23 12:26:09
WordPress/gutenberg
https://api.github.com/repos/WordPress/gutenberg
opened
[Flaky Test] Save flow should allow re-saving after changing the same block attribute
[Type] Flaky Test
<!-- __META_DATA__:{} --> **Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.** ## Test title Save flow should allow re-saving after changing the same block attribute ## Test path `specs/site-editor/multi-entity-saving.test.js` ## Errors <!-- __TEST_RESULTS_LIST__ --> <!-- __TEST_RESULT__ --><details> <summary> <time datetime="2023-02-23T12:26:08.455Z"><code>[2023-02-23T12:26:08.455Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/4252312068"><code>fix/i18n-strings</code></a>. </summary> ``` ● Multi-entity save flow › Site Editor › Save flow should allow re-saving after changing the same block attribute No node found for selector: .edit-post-header [aria-label="Add block"],.edit-site-header [aria-label="Add block"],.edit-post-header [aria-label="Toggle block inserter"],.edit-site-header [aria-label="Toggle block inserter"],.edit-widgets-header [aria-label="Add block"],.edit-widgets-header [aria-label="Toggle block inserter"],.edit-site-header-edit-mode__inserter-toggle at assert (../../node_modules/puppeteer-core/src/common/assert.ts:23:21) at DOMWorld.click (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:461:11) at runMicrotasks (<anonymous>) at toggleGlobalBlockInserter (../e2e-test-utils/build/@wordpress/e2e-test-utils/src/inserter.js:66:2) at openGlobalBlockInserter (../e2e-test-utils/build/@wordpress/e2e-test-utils/src/inserter.js:22:3) at insertFromGlobalInserter (../e2e-test-utils/build/@wordpress/e2e-test-utils/src/inserter.js:229:2) at insertBlock (../e2e-test-utils/build/@wordpress/e2e-test-utils/src/inserter.js:330:2) at Object.<anonymous> (specs/site-editor/multi-entity-saving.test.js:312:4) ``` </details><!-- /__TEST_RESULT__ --> <!-- /__TEST_RESULTS_LIST__ -->
1.0
[Flaky Test] Save flow should allow re-saving after changing the same block attribute - <!-- __META_DATA__:{} --> **Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.** ## Test title Save flow should allow re-saving after changing the same block attribute ## Test path `specs/site-editor/multi-entity-saving.test.js` ## Errors <!-- __TEST_RESULTS_LIST__ --> <!-- __TEST_RESULT__ --><details> <summary> <time datetime="2023-02-23T12:26:08.455Z"><code>[2023-02-23T12:26:08.455Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/4252312068"><code>fix/i18n-strings</code></a>. </summary> ``` ● Multi-entity save flow › Site Editor › Save flow should allow re-saving after changing the same block attribute No node found for selector: .edit-post-header [aria-label="Add block"],.edit-site-header [aria-label="Add block"],.edit-post-header [aria-label="Toggle block inserter"],.edit-site-header [aria-label="Toggle block inserter"],.edit-widgets-header [aria-label="Add block"],.edit-widgets-header [aria-label="Toggle block inserter"],.edit-site-header-edit-mode__inserter-toggle at assert (../../node_modules/puppeteer-core/src/common/assert.ts:23:21) at DOMWorld.click (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:461:11) at runMicrotasks (<anonymous>) at toggleGlobalBlockInserter (../e2e-test-utils/build/@wordpress/e2e-test-utils/src/inserter.js:66:2) at openGlobalBlockInserter (../e2e-test-utils/build/@wordpress/e2e-test-utils/src/inserter.js:22:3) at insertFromGlobalInserter (../e2e-test-utils/build/@wordpress/e2e-test-utils/src/inserter.js:229:2) at insertBlock (../e2e-test-utils/build/@wordpress/e2e-test-utils/src/inserter.js:330:2) at Object.<anonymous> (specs/site-editor/multi-entity-saving.test.js:312:4) ``` </details><!-- /__TEST_RESULT__ --> <!-- /__TEST_RESULTS_LIST__ -->
non_main
save flow should allow re saving after changing the same block attribute flaky test detected this is an auto generated issue by github actions please do not edit this manually test title save flow should allow re saving after changing the same block attribute test path specs site editor multi entity saving test js errors test passed after failed attempt on a href ● multi entity save flow › site editor › save flow should allow re saving after changing the same block attribute no node found for selector edit post header edit site header edit post header edit site header edit widgets header edit widgets header edit site header edit mode inserter toggle at assert node modules puppeteer core src common assert ts at domworld click node modules puppeteer core src common domworld ts at runmicrotasks at toggleglobalblockinserter test utils build wordpress test utils src inserter js at openglobalblockinserter test utils build wordpress test utils src inserter js at insertfromglobalinserter test utils build wordpress test utils src inserter js at insertblock test utils build wordpress test utils src inserter js at object specs site editor multi entity saving test js
0
1,299
5,541,703,969
IssuesEvent
2017-03-22 13:32:17
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Ad hoc usage of ec2_tag module results in AttributeError: 'str' object has no attribute 'items'
affects_2.0 aws bug_report cloud waiting_on_maintainer
##### Issue Type: - Bug Report ##### Component Name: ec2_tag module ##### Ansible Version: ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: When trying to set a tag using module 'ec2_tag', set fails with an error. I have tried several different variants to escape ' or " and to use ' or " to surround the tags field and for the entire -a attributes value field. I stepped through the code and the input tags= value is always a string and not parsed as a dict properly. ##### Steps To Reproduce: These, and variants of, all yield the same parse error: ``` ansible 'localhost' -m ec2_tag -a 'resource=i-074lkeke region=us-west-2 tags={\"Name\":\"foo\"}' -vvv ansible 'localhost' -m ec2_tag -a "resource=i-074lkeke region=us-west-2 tags='{\"Name\":\"server1\"}'" -vvv ``` ##### Expected Results: Success > New Tag created for instance. ##### Actual Results: ``` No config file found; using defaults ESTABLISH LOCAL CONNECTION FOR USER: kfletcher 127.0.0.1 EXEC /bin/sh -c '( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058 `" )' 127.0.0.1 PUT /var/folders/zt/7_vhqsms595dwgk_my5_y26m0000gn/T/tmpRTl37e TO /Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag 127.0.0.1 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag; rm -rf "/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/" > /dev/null 2>&1' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag", line 2368, in <module> main() File "/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag", line 152, in main if set(tags.items()).issubset(set(tagdict.items())): AttributeError: 'str' object has no attribute 'items' localhost | FAILED! => { "changed": false, "failed": true, "invocation": { "module_name": "ec2_tag" }, "parsed": false } ``` FYI, reading the tags works fine: `$ ansible 'localhost' -m ec2_tag -a 'resource=i-074lkeke region=us-west-2 state=list'` ``` localhost | SUCCESS => { "changed": false, "tags": { "Name": "test" } } ``` I also got filters (also requires a dict) working using similar syntax: ``` ansible 'localhost' -m ec2_remote_facts -a 'region=us-west-2 filters={\"private-dns-name\":\"ip-10-13-49-34.us-west-2.compute.internal\"}' ```
True
Ad hoc usage of ec2_tag module results in AttributeError: 'str' object has no attribute 'items' - ##### Issue Type: - Bug Report ##### Component Name: ec2_tag module ##### Ansible Version: ``` ansible 2.0.1.0 config file = configured module search path = Default w/o overrides ``` ##### Ansible Configuration: N/A ##### Environment: N/A ##### Summary: When trying to set a tag using module 'ec2_tag', set fails with an error. I have tried several different variants to escape ' or " and to use ' or " to surround the tags field and for the entire -a attributes value field. I stepped through the code and the input tags= value is always a string and not parsed as a dict properly. ##### Steps To Reproduce: These, and variants of, all yield the same parse error: ``` ansible 'localhost' -m ec2_tag -a 'resource=i-074lkeke region=us-west-2 tags={\"Name\":\"foo\"}' -vvv ansible 'localhost' -m ec2_tag -a "resource=i-074lkeke region=us-west-2 tags='{\"Name\":\"server1\"}'" -vvv ``` ##### Expected Results: Success > New Tag created for instance. ##### Actual Results: ``` No config file found; using defaults ESTABLISH LOCAL CONNECTION FOR USER: kfletcher 127.0.0.1 EXEC /bin/sh -c '( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058 `" )' 127.0.0.1 PUT /var/folders/zt/7_vhqsms595dwgk_my5_y26m0000gn/T/tmpRTl37e TO /Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag 127.0.0.1 EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag; rm -rf "/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/" > /dev/null 2>&1' An exception occurred during task execution. The full traceback is: Traceback (most recent call last): File "/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag", line 2368, in <module> main() File "/Users/kfletcher/.ansible/tmp/ansible-tmp-1458595023.11-22491659346058/ec2_tag", line 152, in main if set(tags.items()).issubset(set(tagdict.items())): AttributeError: 'str' object has no attribute 'items' localhost | FAILED! => { "changed": false, "failed": true, "invocation": { "module_name": "ec2_tag" }, "parsed": false } ``` FYI, reading the tags works fine: `$ ansible 'localhost' -m ec2_tag -a 'resource=i-074lkeke region=us-west-2 state=list'` ``` localhost | SUCCESS => { "changed": false, "tags": { "Name": "test" } } ``` I also got filters (also requires a dict) working using similar syntax: ``` ansible 'localhost' -m ec2_remote_facts -a 'region=us-west-2 filters={\"private-dns-name\":\"ip-10-13-49-34.us-west-2.compute.internal\"}' ```
main
ad hoc usage of tag module results in attributeerror str object has no attribute items issue type bug report component name tag module ansible version ansible config file configured module search path default w o overrides ansible configuration n a environment n a summary when trying to set a tag using module tag set fails with an error i have tried several different variants to escape or and to use or to surround the tags field and for the entire a attributes value field i stepped through the code and the input tags value is always a string and not parsed as a dict properly steps to reproduce these and variants of all yield the same parse error ansible localhost m tag a resource i region us west tags name foo vvv ansible localhost m tag a resource i region us west tags name vvv expected results success new tag created for instance actual results no config file found using defaults establish local connection for user kfletcher exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put var folders zt t to users kfletcher ansible tmp ansible tmp tag exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python users kfletcher ansible tmp ansible tmp tag rm rf users kfletcher ansible tmp ansible tmp dev null an exception occurred during task execution the full traceback is traceback most recent call last file users kfletcher ansible tmp ansible tmp tag line in main file users kfletcher ansible tmp ansible tmp tag line in main if set tags items issubset set tagdict items attributeerror str object has no attribute items localhost failed changed false failed true invocation module name tag parsed false fyi reading the tags works fine ansible localhost m tag a resource i region us west state list localhost success changed false tags name test i also got filters also requires a dict working using similar syntax ansible localhost m remote facts a region us west filters private dns name ip us west compute internal
1
221,924
7,403,748,771
IssuesEvent
2018-03-20 00:32:36
EvictionLab/eviction-maps
https://api.github.com/repos/EvictionLab/eviction-maps
closed
County search selection broken
bug high priority
Selecting counties in search throws an error, likely because of some of the recent changes to search. I can fix this now
1.0
County search selection broken - Selecting counties in search throws an error, likely because of some of the recent changes to search. I can fix this now
non_main
county search selection broken selecting counties in search throws an error likely because of some of the recent changes to search i can fix this now
0
684
4,231,988,673
IssuesEvent
2016-07-04 19:15:27
Microsoft/DirectXTex
https://api.github.com/repos/Microsoft/DirectXTex
opened
Retire Windows 8.1 Store and Windows phone 8.1 projects
maintainence
At some point we should remove support for the older versions in favor of UWP apps ``DirectXTex_Windows81.vcxproj`` ``DirectXTex_WindowsPhone81.vcxproj`` Please put any requests for continued support for one or more of these here.
True
Retire Windows 8.1 Store and Windows phone 8.1 projects - At some point we should remove support for the older versions in favor of UWP apps ``DirectXTex_Windows81.vcxproj`` ``DirectXTex_WindowsPhone81.vcxproj`` Please put any requests for continued support for one or more of these here.
main
retire windows store and windows phone projects at some point we should remove support for the older versions in favor of uwp apps directxtex vcxproj directxtex vcxproj please put any requests for continued support for one or more of these here
1
132,602
18,268,790,179
IssuesEvent
2021-10-04 11:38:16
artsking/linux-3.0.35
https://api.github.com/repos/artsking/linux-3.0.35
opened
CVE-2019-15921 (Medium) detected in linux-stable-rtv3.8.6
security vulnerability
## CVE-2019-15921 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/artsking/linux-3.0.35/commit/5992fa81c6ac1b4e9db13f5408d914525c5b7875">5992fa81c6ac1b4e9db13f5408d914525c5b7875</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netlink/genetlink.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel before 5.0.6. There is a memory leak issue when idr_alloc() fails in genl_register_family() in net/netlink/genetlink.c. <p>Publish Date: 2019-09-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15921>CVE-2019-15921</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.0.6">https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.0.6</a></p> <p>Release Date: 2019-09-04</p> <p>Fix Resolution: v5.1-rc3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-15921 (Medium) detected in linux-stable-rtv3.8.6 - ## CVE-2019-15921 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv3.8.6</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/artsking/linux-3.0.35/commit/5992fa81c6ac1b4e9db13f5408d914525c5b7875">5992fa81c6ac1b4e9db13f5408d914525c5b7875</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netlink/genetlink.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel before 5.0.6. There is a memory leak issue when idr_alloc() fails in genl_register_family() in net/netlink/genetlink.c. <p>Publish Date: 2019-09-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-15921>CVE-2019-15921</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.0.6">https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.0.6</a></p> <p>Release Date: 2019-09-04</p> <p>Fix Resolution: v5.1-rc3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files net netlink genetlink c vulnerability details an issue was discovered in the linux kernel before there is a memory leak issue when idr alloc fails in genl register family in net netlink genetlink c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
5,216
26,471,040,560
IssuesEvent
2023-01-17 07:19:42
OpenRefine/OpenRefine
https://api.github.com/repos/OpenRefine/OpenRefine
opened
Migrate to use Java distribution - Eclipse Temurin since AdoptOpenJDK is archived and not publishing security releases
bug maintainability java version compatibility CI/CD
We are missing out on security updates for the Java distribution since we have not migrated yet. As noted in https://github.com/actions/setup-java/blob/main/docs/advanced-usage.md#adopt > NOTE: Adopt OpenJDK got moved to Eclipse Temurin and won't be updated anymore. It is highly recommended to migrate workflows from adopt to temurin to keep receiving software and security updates. See more details in the [Good-bye AdoptOpenJDK post](https://blog.adoptopenjdk.net/2021/08/goodbye-adoptopenjdk-hello-adoptium/). ### To Reproduce Steps to reproduce the behavior: 1. Run the PR workflow. ### Current Results Latest JDK semver (currently jdk-17.0.5+8) from matrix is not used for build/test. ### Expected Behavior We should migrate to using Eclipse `temurin` distribution since Adopt OpenJDK `adopt` has now officially been archived and no longer providing security releases (JCK and AQAvit certified) but instead those are under `temurin`.
True
Migrate to use Java distribution - Eclipse Temurin since AdoptOpenJDK is archived and not publishing security releases - We are missing out on security updates for the Java distribution since we have not migrated yet. As noted in https://github.com/actions/setup-java/blob/main/docs/advanced-usage.md#adopt > NOTE: Adopt OpenJDK got moved to Eclipse Temurin and won't be updated anymore. It is highly recommended to migrate workflows from adopt to temurin to keep receiving software and security updates. See more details in the [Good-bye AdoptOpenJDK post](https://blog.adoptopenjdk.net/2021/08/goodbye-adoptopenjdk-hello-adoptium/). ### To Reproduce Steps to reproduce the behavior: 1. Run the PR workflow. ### Current Results Latest JDK semver (currently jdk-17.0.5+8) from matrix is not used for build/test. ### Expected Behavior We should migrate to using Eclipse `temurin` distribution since Adopt OpenJDK `adopt` has now officially been archived and no longer providing security releases (JCK and AQAvit certified) but instead those are under `temurin`.
main
migrate to use java distribution eclipse temurin since adoptopenjdk is archived and not publishing security releases we are missing out on security updates for the java distribution since we have not migrated yet as noted in note adopt openjdk got moved to eclipse temurin and won t be updated anymore it is highly recommended to migrate workflows from adopt to temurin to keep receiving software and security updates see more details in the to reproduce steps to reproduce the behavior run the pr workflow current results latest jdk semver currently jdk from matrix is not used for build test expected behavior we should migrate to using eclipse temurin distribution since adopt openjdk adopt has now officially been archived and no longer providing security releases jck and aqavit certified but instead those are under temurin
1
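The adopt-to-temurin migration requested in the record above amounts to changing the `distribution:` input of `actions/setup-java`. A minimal sketch of that rewrite, applied to a hypothetical workflow fragment (the YAML below is illustrative, not OpenRefine's actual workflow file):

```python
import re

# Hypothetical actions/setup-java step still pinned to the archived
# Adopt OpenJDK distribution; "adopt" is the value the issue says to drop.
workflow = """\
      - uses: actions/setup-java@v3
        with:
          distribution: adopt
          java-version: 17
"""

# Rewrite only the distribution input, leaving the rest of the step intact.
migrated = re.sub(r"(distribution:\s*)adopt\b", r"\1temurin", workflow)
print("temurin" in migrated)  # True
```

In a real repository this edit would be made directly in the `.github/workflows/*.yml` files rather than via a script; the snippet only illustrates the one-line nature of the change.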
282,543
30,889,361,210
IssuesEvent
2023-08-04 02:36:37
maddyCode23/linux-4.1.15
https://api.github.com/repos/maddyCode23/linux-4.1.15
reopened
CVE-2019-3701 (Medium) detected in linux-stable-rtv4.1.33
Mend: dependency security vulnerability
## CVE-2019-3701 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/can/gw.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/can/gw.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in can_can_gw_rcv in net/can/gw.c in the Linux kernel through 4.19.13. The CAN frame modification rules allow bitwise logical operations that can be also applied to the can_dlc field. The privileged user "root" with CAP_NET_ADMIN can create a CAN frame modification rule that makes the data length code a higher value than the available CAN frame data size. In combination with a configured checksum calculation where the result is stored relatively to the end of the data (e.g. cgw_csum_xor_rel) the tail of the skb (e.g. frag_list pointer in skb_shared_info) can be rewritten which finally can cause a system crash. 
Because of a missing check, the CAN drivers may write arbitrary content beyond the data registers in the CAN controller's I/O memory when processing can-gw manipulated outgoing frames. <p>Publish Date: 2019-01-03 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-3701>CVE-2019-3701</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3701">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3701</a></p> <p>Release Date: 2019-09-03</p> <p>Fix Resolution: v5.0-rc3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-3701 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2019-3701 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/can/gw.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/can/gw.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in can_can_gw_rcv in net/can/gw.c in the Linux kernel through 4.19.13. The CAN frame modification rules allow bitwise logical operations that can be also applied to the can_dlc field. The privileged user "root" with CAP_NET_ADMIN can create a CAN frame modification rule that makes the data length code a higher value than the available CAN frame data size. In combination with a configured checksum calculation where the result is stored relatively to the end of the data (e.g. cgw_csum_xor_rel) the tail of the skb (e.g. 
frag_list pointer in skb_shared_info) can be rewritten which finally can cause a system crash. Because of a missing check, the CAN drivers may write arbitrary content beyond the data registers in the CAN controller's I/O memory when processing can-gw manipulated outgoing frames. <p>Publish Date: 2019-01-03 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-3701>CVE-2019-3701</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: High - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3701">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-3701</a></p> <p>Release Date: 2019-09-03</p> <p>Fix Resolution: v5.0-rc3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files net can gw c net can gw c vulnerability details an issue was discovered in can can gw rcv in net can gw c in the linux kernel through the can frame modification rules allow bitwise logical operations that can be also applied to the can dlc field the privileged user root with cap net admin can create a can frame modification rule that makes the data length code a higher value than the available can frame data size in combination with a configured checksum calculation where the result is stored relatively to the end of the data e g cgw csum xor rel the tail of the skb e g frag list pointer in skb shared info can be rewritten which finally can cause a system crash because of a missing check the can drivers may write arbitrary content beyond the data registers in the can controller s i o memory when processing can gw manipulated outgoing frames publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
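The "Base Score Metrics" breakdowns quoted in these vulnerability records can be reproduced with a short CVSS v3.0 sketch. The metric weights below come from the public CVSS v3.0 specification; only the Scope:Unchanged case used by these records is implemented:

```python
import math

# CVSS v3.0 metric weights (Scope: Unchanged case only).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}              # C/I/A impact

def roundup(x):
    # CVSS "round up to one decimal place"
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    isc_base = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * isc_base  # Scope: Unchanged
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# CVE-2019-3701: AV:L/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:H -> 4.4
print(base_score("L", "L", "H", "N", "N", "N", "H"))  # 4.4
```

The same function reproduces the other scores in this chunk: AV:L/AC:H/PR:L/UI:N with A:H gives 4.7 (CVE-2019-15921), and AV:L/AC:L/PR:L/UI:N with C:H/I:H/A:H gives 7.8 (CVE-2022-40764).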
2,777
9,960,819,156
IssuesEvent
2019-07-06 20:21:26
chocolatey-community/chocolatey-package-requests
https://api.github.com/repos/chocolatey-community/chocolatey-package-requests
closed
RFP - NetSetMan (Network Settings Manager)
Status: Available For Maintainer(s)
http://www.netsetman.com/en/freeware#download A handy utility to easily switch between different network configurations. There is a free version, as well as a paid version (package for the free version would be the main priority). From the download page: > 3in1: Setup, Update & Portable in one file! > License: Non-Commercial Freeware > Release date: 2017-11-29 > Language: Multilingual > For Windows: XP/Vista/7/8/10 (32/64 Bit)
True
RFP - NetSetMan (Network Settings Manager) - http://www.netsetman.com/en/freeware#download A handy utility to easily switch between different network configurations. There is a free version, as well as a paid version (package for the free version would be the main priority). From the download page: > 3in1: Setup, Update & Portable in one file! > License: Non-Commercial Freeware > Release date: 2017-11-29 > Language: Multilingual > For Windows: XP/Vista/7/8/10 (32/64 Bit)
main
rfp netsetman network settings manager a handy utility to easily switch between different network configurations there is a free version as well as a paid version package for the free version would be the main priority from the download page setup update portable in one file license non commercial freeware release date language multilingual for windows xp vista bit
1
684,248
23,412,476,185
IssuesEvent
2022-08-12 19:10:53
rancher/docs
https://api.github.com/repos/rancher/docs
closed
Update AzureAD auth provider setup instructions
[zube]: Working Authentication Air gap install Azure priority/medium Rancher2 blocker release notes team/area1
**Request Summary:** Related to https://github.com/rancher/rancher/issues/29306 Needed for `v2.6.7`. We need to add a new section with instructions on how to set up AzureAD (OAuth) as the external auth provider in Rancher through the new flow via the Microsoft Graph API. The current instructions describe how to set up an app in Azure for the deprecated flow via the Azure AD Graph API. The update warrants two different tabs: 1. The Rancher 2.6.0 - 2.6.6 tab: the current setup instructions will be removed because it is impossible to grant the necessary permissions to apps on the Azure portal, since Microsoft have marked the old Azure AD Graph API as deprecated. They don't allow to use it (it's grayed out in the UI). Instead, the tab will describe the migration process for existing deprecated setups. 1. The Rancher 2.6.7+ tab will have the full instructions on how to set Azure AD with Rancher on both sides (Azure portal and Rancher). There will be new screenshots. This would need a backport for the v2.5.x version of the docs. **Details:** ## New authentication and authorization flow via Microsoft Graph API: - Provide instructions on how to set it up (namely what permissions are needed). Those are application (not delegated) permissions: `Graph.Read.All` and `User.Read.All`. This is the same for new and existing apps that need to be updated on Rancher upgrade and endpoint update. ## Existing (deprecated) authentication and authorization flow via Azure AD Graph API: - It would be good to update the screenshots from the Azure portal, as its UI has changed significantly in the last couple of years. 
- Explicitly say that Rancher has been changed to use the new Microsoft Graph API instead of the old Azure AD Graph API (to be retired by end of 2022) - Describe the endpoint update process (migration) of deprecated setups (Rancher upgrade scenario) - Show a screenshot of the banner in the UI and the button that performs the endpoint update (which completes the move to the new Microsoft Graph API) - Refer to the table below for the full list of endpoint changes that Rancher performs (admins need not do this manually) - Mention that before admins are ready to press the button and commit to the endpoint update, they must ensure their Azure app has a new set of permissions (the old permissions would no longer be needed) - Include a note on air-gap environments for customers who whitelist endpoints, since the Graph Endpoint URL is changing ## General - Describe the steps to revert the migration (admins must edit the authconfig resource named `azuread` and specify the endpoints according to a table below, as well as remove the `auth.cattle.io/azuread-endpoint-migrated` annotation). - Mention that Rancher does not make assumptions about Custom endpoints, it's on admins to ensure they are properly specified. - Mention how Azure app owners who might want to rotate the Application Secret would need to do the same in Rancher (stored in a Kubernetes secret called `azureadconfig-applicationsecret` in the `cattle-global-data` namespace), since Rancher won't automatically update the Application Secret when it is changed in Azure. - Mention that if admins upgrade to Rancher v2.6.7 with an existing Azure AD setup and choose to disable the auth provider, they won't be able to restore the previous setup and will need to register anew, now with the new auth flow in mind, as there won't be a way to explicitly set up Azure AD the old way, Rancher will use the new Graph API and, therefore, would need to have the proper permissions in the Azure portal. 
## Endpoints ### GLOBAL #### Deprecated endpoints Auth Endpoint: https://login.microsoftonline.com/{tenantID}/oauth2/authorize Endpoint: https://login.microsoftonline.com/ Graph Endpoint: https://graph.windows.net/ Token Endpoint: https://login.microsoftonline.com/{tenantID}/oauth2/token #### New endpoints Auth Endpoint: https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/authorize Endpoint: https://login.microsoftonline.com/ Graph Endpoint: https://graph.microsoft.com Token Endpoint: https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/token ### CHINA #### Deprecated endpoints Auth Endpoint: https://login.chinacloudapi.cn/{tenantID}/oauth2/authorize Endpoint: https://login.chinacloudapi.cn/ Graph Endpoint: https://graph.chinacloudapi.cn/ Token Endpoint: https://login.chinacloudapi.cn/{tenantID}/oauth2/token #### New endpoints Auth Endpoint: https://login.partner.microsoftonline.cn/{tenantID}/oauth2/v2.0/authorize Endpoint: https://login.partner.microsoftonline.cn/ Graph Endpoint: https://microsoftgraph.chinacloudapi.cn Token Endpoint: https://login.partner.microsoftonline.cn/{tenantID}/oauth2/v2.0/token
1.0
Update AzureAD auth provider setup instructions - **Request Summary:** Related to https://github.com/rancher/rancher/issues/29306 Needed for `v2.6.7`. We need to add a new section with instructions on how to set up AzureAD (OAuth) as the external auth provider in Rancher through the new flow via the Microsoft Graph API. The current instructions describe how to set up an app in Azure for the deprecated flow via the Azure AD Graph API. The update warrants two different tabs: 1. The Rancher 2.6.0 - 2.6.6 tab: the current setup instructions will be removed because it is impossible to grant the necessary permissions to apps on the Azure portal, since Microsoft have marked the old Azure AD Graph API as deprecated. They don't allow to use it (it's grayed out in the UI). Instead, the tab will describe the migration process for existing deprecated setups. 1. The Rancher 2.6.7+ tab will have the full instructions on how to set Azure AD with Rancher on both sides (Azure portal and Rancher). There will be new screenshots. This would need a backport for the v2.5.x version of the docs. **Details:** ## New authentication and authorization flow via Microsoft Graph API: - Provide instructions on how to set it up (namely what permissions are needed). Those are application (not delegated) permissions: `Graph.Read.All` and `User.Read.All`. This is the same for new and existing apps that need to be updated on Rancher upgrade and endpoint update. ## Existing (deprecated) authentication and authorization flow via Azure AD Graph API: - It would be good to update the screenshots from the Azure portal, as its UI has changed significantly in the last couple of years. 
- Explicitly say that Rancher has been changed to use the new Microsoft Graph API instead of the old Azure AD Graph API (to be retired by end of 2022) - Describe the endpoint update process (migration) of deprecated setups (Rancher upgrade scenario) - Show a screenshot of the banner in the UI and the button that performs the endpoint update (which completes the move to the new Microsoft Graph API) - Refer to the table below for the full list of endpoint changes that Rancher performs (admins need not do this manually) - Mention that before admins are ready to press the button and commit to the endpoint update, they must ensure their Azure app has a new set of permissions (the old permissions would no longer be needed) - Include a note on air-gap environments for customers who whitelist endpoints, since the Graph Endpoint URL is changing ## General - Describe the steps to revert the migration (admins must edit the authconfig resource named `azuread` and specify the endpoints according to a table below, as well as remove the `auth.cattle.io/azuread-endpoint-migrated` annotation). - Mention that Rancher does not make assumptions about Custom endpoints, it's on admins to ensure they are properly specified. - Mention how Azure app owners who might want to rotate the Application Secret would need to do the same in Rancher (stored in a Kubernetes secret called `azureadconfig-applicationsecret` in the `cattle-global-data` namespace), since Rancher won't automatically update the Application Secret when it is changed in Azure. - Mention that if admins upgrade to Rancher v2.6.7 with an existing Azure AD setup and choose to disable the auth provider, they won't be able to restore the previous setup and will need to register anew, now with the new auth flow in mind, as there won't be a way to explicitly set up Azure AD the old way, Rancher will use the new Graph API and, therefore, would need to have the proper permissions in the Azure portal. 
## Endpoints ### GLOBAL #### Deprecated endpoints Auth Endpoint: https://login.microsoftonline.com/{tenantID}/oauth2/authorize Endpoint: https://login.microsoftonline.com/ Graph Endpoint: https://graph.windows.net/ Token Endpoint: https://login.microsoftonline.com/{tenantID}/oauth2/token #### New endpoints Auth Endpoint: https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/authorize Endpoint: https://login.microsoftonline.com/ Graph Endpoint: https://graph.microsoft.com Token Endpoint: https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/token ### CHINA #### Deprecated endpoints Auth Endpoint: https://login.chinacloudapi.cn/{tenantID}/oauth2/authorize Endpoint: https://login.chinacloudapi.cn/ Graph Endpoint: https://graph.chinacloudapi.cn/ Token Endpoint: https://login.chinacloudapi.cn/{tenantID}/oauth2/token #### New endpoints Auth Endpoint: https://login.partner.microsoftonline.cn/{tenantID}/oauth2/v2.0/authorize Endpoint: https://login.partner.microsoftonline.cn/ Graph Endpoint: https://microsoftgraph.chinacloudapi.cn Token Endpoint: https://login.partner.microsoftonline.cn/{tenantID}/oauth2/v2.0/token
non_main
update azuread auth provider setup instructions request summary related to needed for we need to add a new section with instructions on how to set up azuread oauth as the external auth provider in rancher through the new flow via the microsoft graph api the current instructions describe how to set up an app in azure for the deprecated flow via the azure ad graph api the update warrants two different tabs the rancher tab the current setup instructions will be removed because it is impossible to grant the necessary permissions to apps on the azure portal since microsoft have marked the old azure ad graph api as deprecated they don t allow to use it it s grayed out in the ui instead the tab will describe the migration process for existing deprecated setups the rancher tab will have the full instructions on how to set azure ad with rancher on both sides azure portal and rancher there will be new screenshots this would need a backport for the x version of the docs details new authentication and authorization flow via microsoft graph api provide instructions on how to set it up namely what permissions are needed those are application not delegated permissions graph read all and user read all this is the same for new and existing apps that need to be updated on rancher upgrade and endpoint update existing deprecated authentication and authorization flow via azure ad graph api it would be good to update the screenshots from the azure portal as its ui has changed significantly in the last couple of years explicitly say that rancher has been changed to use the new microsoft graph api instead of the old azure ad graph api to be retired by end of describe the endpoint update process migration of deprecated setups rancher upgrade scenario show a screenshot of the banner in the ui and the button that performs the endpoint update which completes the move to the new microsoft graph api refer to the table below for the full list of endpoint changes that rancher performs admins need 
not do this manually mention that before admins are ready to press the button and commit to the endpoint update they must ensure their azure app has a new set of permissions the old permissions would no longer be needed include a note on air gap environments for customers who whitelist endpoints since the graph endpoint url is changing general describe the steps to revert the migration admins must edit the authconfig resource named azuread and specify the endpoints according to a table below as well as remove the auth cattle io azuread endpoint migrated annotation mention that rancher does not make assumptions about custom endpoints it s on admins to ensure they are properly specified mention how azure app owners who might want to rotate the application secret would need to do the same in rancher stored in a kubernetes secret called azureadconfig applicationsecret in the cattle global data namespace since rancher won t automatically update the application secret when it is changed in azure mention that if admins upgrade to rancher with an existing azure ad setup and choose to disable the auth provider they won t be able to restore the previous setup and will need to register anew now with the new auth flow in mind as there won t be a way to explicitly set up azure ad the old way rancher will use the new graph api and therefore would need to have the proper permissions in the azure portal endpoints global deprecated endpoints auth endpoint endpoint graph endpoint token endpoint new endpoints auth endpoint endpoint graph endpoint token endpoint china deprecated endpoints auth endpoint endpoint graph endpoint token endpoint new endpoints auth endpoint endpoint graph endpoint token endpoint
0
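The endpoint tables in the AzureAD record above describe a deprecated-to-new mapping for the GLOBAL cloud. A minimal sketch of that mapping as a lookup (the `{tenantID}` placeholder is kept literal, exactly as the issue writes it; `migrate_endpoint` is a hypothetical helper, not Rancher code):

```python
# Deprecated -> new Azure AD endpoints for the GLOBAL cloud, taken
# verbatim from the endpoint tables in the issue above.
GLOBAL_ENDPOINT_MIGRATION = {
    "https://login.microsoftonline.com/{tenantID}/oauth2/authorize":
        "https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/authorize",
    "https://graph.windows.net/":
        "https://graph.microsoft.com",
    "https://login.microsoftonline.com/{tenantID}/oauth2/token":
        "https://login.microsoftonline.com/{tenantID}/oauth2/v2.0/token",
}

def migrate_endpoint(url: str) -> str:
    # Known deprecated endpoints map to their Microsoft Graph era
    # replacements; anything else is returned unchanged, mirroring the
    # issue's note that custom endpoints are left to the admin.
    return GLOBAL_ENDPOINT_MIGRATION.get(url, url)

print(migrate_endpoint("https://graph.windows.net/"))  # https://graph.microsoft.com
```

The CHINA-cloud table in the same record would extend the dictionary the same way, with the `login.partner.microsoftonline.cn` and `microsoftgraph.chinacloudapi.cn` values it lists.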
291,803
21,940,553,207
IssuesEvent
2022-05-23 17:37:26
pharmaverse/admiral
https://api.github.com/repos/pharmaverse/admiral
closed
Documentation: Update documentation of derive_var_trtredtm
documentation
### Please select a category the issue is focused on? Function documentation ### Let us know where something needs a refresh or put your idea here! Update details section of `derive_var_trtedtm()`. It should also mention the imputation done internally.
1.0
Documentation: Update documentation of derive_var_trtredtm - ### Please select a category the issue is focused on? Function documentation ### Let us know where something needs a refresh or put your idea here! Update details section of `derive_var_trtedtm()`. It should also mention the imputation done internally.
non_main
documentation update documentation of derive var trtredtm please select a category the issue is focused on function documentation let us know where something needs a refresh or put your idea here update details section of derive var trtedtm it should also mention the imputation done internally
0
268,051
28,565,736,983
IssuesEvent
2023-04-21 01:50:55
turkdevops/node
https://api.github.com/repos/turkdevops/node
closed
CVE-2022-40764 (High) detected in snyk-1.437.3.tgz - autoclosed
Mend: dependency security vulnerability
## CVE-2022-40764 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snyk-1.437.3.tgz</b></p></summary> <p>snyk library and cli utility</p> <p>Library home page: <a href="https://registry.npmjs.org/snyk/-/snyk-1.437.3.tgz">https://registry.npmjs.org/snyk/-/snyk-1.437.3.tgz</a></p> <p> Dependency Hierarchy: - :x: **snyk-1.437.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/node/commit/ff9854eebb369e48ecc229d0b1f8dbcb60bbf23f">ff9854eebb369e48ecc229d0b1f8dbcb60bbf23f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Snyk CLI before 1.996.0 allows arbitrary command execution, affecting Snyk IDE plugins and the snyk npm package. Exploitation could follow from the common practice of viewing untrusted files in the Visual Studio Code editor, for example. The original demonstration was with shell metacharacters in the vendor.json ignore field, affecting snyk-go-plugin before 1.19.1. This affects, for example, the Snyk TeamCity plugin (which does not update automatically) before 20220930.142957. 
<p>Publish Date: 2022-10-03 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40764>CVE-2022-40764</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-hpqj-7cj6-hfj8">https://github.com/advisories/GHSA-hpqj-7cj6-hfj8</a></p> <p>Release Date: 2022-10-03</p> <p>Fix Resolution: 1.996.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-40764 (High) detected in snyk-1.437.3.tgz - autoclosed - ## CVE-2022-40764 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snyk-1.437.3.tgz</b></p></summary> <p>snyk library and cli utility</p> <p>Library home page: <a href="https://registry.npmjs.org/snyk/-/snyk-1.437.3.tgz">https://registry.npmjs.org/snyk/-/snyk-1.437.3.tgz</a></p> <p> Dependency Hierarchy: - :x: **snyk-1.437.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/node/commit/ff9854eebb369e48ecc229d0b1f8dbcb60bbf23f">ff9854eebb369e48ecc229d0b1f8dbcb60bbf23f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> Snyk CLI before 1.996.0 allows arbitrary command execution, affecting Snyk IDE plugins and the snyk npm package. Exploitation could follow from the common practice of viewing untrusted files in the Visual Studio Code editor, for example. The original demonstration was with shell metacharacters in the vendor.json ignore field, affecting snyk-go-plugin before 1.19.1. This affects, for example, the Snyk TeamCity plugin (which does not update automatically) before 20220930.142957. 
<p>Publish Date: 2022-10-03 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40764>CVE-2022-40764</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-hpqj-7cj6-hfj8">https://github.com/advisories/GHSA-hpqj-7cj6-hfj8</a></p> <p>Release Date: 2022-10-03</p> <p>Fix Resolution: 1.996.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in snyk tgz autoclosed cve high severity vulnerability vulnerable library snyk tgz snyk library and cli utility library home page a href dependency hierarchy x snyk tgz vulnerable library found in head commit a href found in base branch master vulnerability details snyk cli before allows arbitrary command execution affecting snyk ide plugins and the snyk npm package exploitation could follow from the common practice of viewing untrusted files in the visual studio code editor for example the original demonstration was with shell metacharacters in the vendor json ignore field affecting snyk go plugin before this affects for example the snyk teamcity plugin which does not update automatically before publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
4,843
24,962,901,929
IssuesEvent
2022-11-01 16:55:06
bazelbuild/intellij
https://api.github.com/repos/bazelbuild/intellij
closed
Nested Starlark function (closure) incorrectly highlighted as syntax error
type: bug product: IntelliJ lang: starlark awaiting-maintainer
### Description of the bug: Despite being valid Starlark, the following code snippet results in a syntax error shown: <img width="288" alt="Screen Shot 2022-10-19 at 11 40 59 AM" src="https://user-images.githubusercontent.com/123678/196776842-c64ed543-86ec-4a21-9d2e-59f82729cd77.png"> ### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. Put the following code in a .bzl file and edit with IntelliJ: ```python def foo(): def bar(): pass ``` ### Which Intellij IDE are you using? Please provide the specific version. 2022.2.3 Build IU-222.4345.14 ### What programming languages and tools are you using? Please provide specific versions. Starlark ### What Bazel plugin version are you using? 2022.09.20.0.1-api-version-222 ### Have you found anything relevant by searching the web? _No response_ ### Any other information, logs, or outputs that you want to share? _No response_
True
Nested Starlark function (closure) incorrectly highlighted as syntax error - ### Description of the bug: Despite being valid Starlark, the following code snippet results in a syntax error shown: <img width="288" alt="Screen Shot 2022-10-19 at 11 40 59 AM" src="https://user-images.githubusercontent.com/123678/196776842-c64ed543-86ec-4a21-9d2e-59f82729cd77.png"> ### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible. Put the following code in a .bzl file and edit with IntelliJ: ```python def foo(): def bar(): pass ``` ### Which Intellij IDE are you using? Please provide the specific version. 2022.2.3 Build IU-222.4345.14 ### What programming languages and tools are you using? Please provide specific versions. Starlark ### What Bazel plugin version are you using? 2022.09.20.0.1-api-version-222 ### Have you found anything relevant by searching the web? _No response_ ### Any other information, logs, or outputs that you want to share? _No response_
main
nested starlark function closure incorrectly highlighted as syntax error description of the bug despite being valid starlark the following code snippet results in a syntax error shown img width alt screen shot at am src what s the simplest easiest way to reproduce this bug please provide a minimal example if possible put the following code in a bzl file and edit with intellij python def foo def bar pass which intellij ide are you using please provide the specific version build iu what programming languages and tools are you using please provide specific versions starlark what bazel plugin version are you using api version have you found anything relevant by searching the web no response any other information logs or outputs that you want to share no response
1
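The Starlark snippet quoted in the record above is reported as a false-positive syntax error in the IDE. Since the function-definition syntax involved is shared with Python, a quick local sanity check can confirm the snippet itself is well-formed; this is a proxy check using Python's parser (an assumption — it does not exercise a real Starlark interpreter or the IntelliJ highlighter):

```python
import ast

# The snippet from the bug report; nested "def" is valid in both
# Python and Starlark, so Python's parser is a reasonable proxy here.
SNIPPET = """
def foo():
    def bar():
        pass
"""

def parses_cleanly(source: str) -> bool:
    """Return True if the source parses with no syntax errors."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False
```

`parses_cleanly(SNIPPET)` returning `True` supports the report's claim that the highlighter, not the language, is at fault.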
1,888
6,577,527,569
IssuesEvent
2017-09-12 01:32:16
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Docker pull: always and state: reloaded redeploys named containers every time
affects_1.9 bug_report cloud docker waiting_on_maintainer
##### Issue Type: - Bug Report ##### Plugin Name: Docker ##### Ansible Version: Doesn't work in 1.9.1 to 2.0.0.2 ##### Ansible Configuration: ``` [defaults] host_key_checking=False display_skipped_hosts=False force_handlers = True hostfile = inventory/ec2.py retry_files_enabled = False [ssh_connection] pipelining=True ``` ##### Environment: Ubuntu 14.04 from OSX 10.10 ##### Summary: I'm not sure if it ever worked, or if I just imagined it worked, but according to the documentation: `"reloaded" (added in Ansible 1.9) asserts that all matching containers are running and restarts any that have any images or configuration out of date.` This does not seem to be the case, as named containers that have nothing changed, will be reloaded every time. I'm almost positive this was properly working at some point so if someone could try it out to see if maybe it's just something with my setup, that would be great :smile: ##### Steps To Reproduce: ``` yaml - name: create redis container docker: name: redis-test image: "redis:3.0.3" pull: always state: reloaded ``` ##### Expected Results: When a container already exists and it has all the same settings except the dynamically assigned name is different, nothing should happen: **First run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create named redis container] ********************* changed: [x.x.x.x] ``` **Second run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create named redis container] ********************* ok: [x.x.x.x] ``` ##### Actual Results: It will create a new, separate container every time: **First run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create redis container] ********************* changed: [x.x.x.x] ``` **Second run:** ``` GATHERING FACTS 
*************************************************************** ok: [x.x.x.x] TASK: [docker | create redis container] ********************* changed: [x.x.x.x] ``` **Debug output (v1.9.1):** ``` yaml changed: [54.209.183.233] => { "ansible_facts": { "docker_containers": [{ "Id": "ac0311feb2ab0f24de62003197bfe327ee3a46bee416107ccf6df28561b4a50e", "Warnings": null }] }, "changed": true, "containers": [{ "Id": "ac0311feb2ab0f24de62003197bfe327ee3a46bee416107ccf6df28561b4a50e", "Warnings": null }], "msg": "started 1 container, created 1 container.", "reload_reasons": null, <<<---- HUH? if there are no reasons to reload, then why does it happen? "summary": { "created": 1, "killed": 0, "pulled": 0, "removed": 0, "restarted": 0, "started": 1, "stopped": 0 } } ``` **Debug output (v2.0.0.2):** `"reload_reasons": "net (default => bridge)"` Weird... looking into it This seems to be very similar to https://github.com/ansible/ansible-modules-core/issues/3219, but it is NOT fixed by removing the relevant commit for that issue
True
Docker pull: always and state: reloaded redeploys named containers every time - ##### Issue Type: - Bug Report ##### Plugin Name: Docker ##### Ansible Version: Doesn't work in 1.9.1 to 2.0.0.2 ##### Ansible Configuration: ``` [defaults] host_key_checking=False display_skipped_hosts=False force_handlers = True hostfile = inventory/ec2.py retry_files_enabled = False [ssh_connection] pipelining=True ``` ##### Environment: Ubuntu 14.04 from OSX 10.10 ##### Summary: I'm not sure if it ever worked, or if I just imagined it worked, but according to the documentation: `"reloaded" (added in Ansible 1.9) asserts that all matching containers are running and restarts any that have any images or configuration out of date.` This does not seem to be the case, as named containers that have nothing changed, will be reloaded every time. I'm almost positive this was properly working at some point so if someone could try it out to see if maybe it's just something with my setup, that would be great :smile: ##### Steps To Reproduce: ``` yaml - name: create redis container docker: name: redis-test image: "redis:3.0.3" pull: always state: reloaded ``` ##### Expected Results: When a container already exists and it has all the same settings except the dynamically assigned name is different, nothing should happen: **First run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create named redis container] ********************* changed: [x.x.x.x] ``` **Second run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create named redis container] ********************* ok: [x.x.x.x] ``` ##### Actual Results: It will create a new, separate container every time: **First run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create redis container] ********************* changed: [x.x.x.x] ``` **Second 
run:** ``` GATHERING FACTS *************************************************************** ok: [x.x.x.x] TASK: [docker | create redis container] ********************* changed: [x.x.x.x] ``` **Debug output (v1.9.1):** ``` yaml changed: [54.209.183.233] => { "ansible_facts": { "docker_containers": [{ "Id": "ac0311feb2ab0f24de62003197bfe327ee3a46bee416107ccf6df28561b4a50e", "Warnings": null }] }, "changed": true, "containers": [{ "Id": "ac0311feb2ab0f24de62003197bfe327ee3a46bee416107ccf6df28561b4a50e", "Warnings": null }], "msg": "started 1 container, created 1 container.", "reload_reasons": null, <<<---- HUH? if there are no reasons to reload, then why does it happen? "summary": { "created": 1, "killed": 0, "pulled": 0, "removed": 0, "restarted": 0, "started": 1, "stopped": 0 } } ``` **Debug output (v2.0.0.2):** `"reload_reasons": "net (default => bridge)"` Weird... looking into it This seems to be very similar to https://github.com/ansible/ansible-modules-core/issues/3219, but it is NOT fixed by removing the relevant commit for that issue
main
docker pull always and state reloaded redeploys named containers every time issue type bug report plugin name docker ansible version doesn t work in to ansible configuration host key checking false display skipped hosts false force handlers true hostfile inventory py retry files enabled false pipelining true environment ubuntu from osx summary i m not sure if it ever worked or if i just imagined it worked but according to the documentation reloaded added in ansible asserts that all matching containers are running and restarts any that have any images or configuration out of date this does not seem to be the case as named containers that have nothing changed will be reloaded every time i m almost positive this was properly working at some point so if someone could try it out to see if maybe it s just something with my setup that would be great smile steps to reproduce yaml name create redis container docker name redis test image redis pull always state reloaded expected results when a container already exists and it has all the same settings except the dynamically assigned name is different nothing should happen first run gathering facts ok task changed second run gathering facts ok task ok actual results it will create a new separate container every time first run gathering facts ok task changed second run gathering facts ok task changed debug output yaml changed ansible facts docker containers id warnings null changed true containers id warnings null msg started container created container reload reasons null huh if there are no reasons to reload then why does it happen summary created killed pulled removed restarted started stopped debug output reload reasons net default bridge weird looking into it this seems to be very similar to but it is not fixed by removing the relevant commit for that issue
1
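The spurious `"reload_reasons": "net (default => bridge)"` in the record above suggests the module diffs the desired configuration against the container's actual settings without normalizing unset keys to their defaults. A minimal sketch of default-aware diffing (a hypothetical helper for illustration, not Ansible's actual code; the `DEFAULTS` table is an assumption):

```python
# Hypothetical default-aware config diff: a container whose "net" key is
# unset should compare equal to an explicit "bridge", so no spurious
# reload reason is produced.
DEFAULTS = {"net": "bridge", "pull": "missing"}

def reload_reasons(desired: dict, actual: dict) -> list:
    """Return human-readable reasons why the container must be reloaded."""
    reasons = []
    for key in set(desired) | set(actual):
        want = desired.get(key, DEFAULTS.get(key))
        have = actual.get(key, DEFAULTS.get(key))
        if want != have:
            reasons.append(f"{key} ({have} => {want})")
    return sorted(reasons)
```

With this normalization, an unchanged container yields an empty reason list and would be left alone rather than redeployed on every run.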
3,642
14,740,297,590
IssuesEvent
2021-01-07 08:51:38
KhronosGroup/SPIRV-Cross
https://api.github.com/repos/KhronosGroup/SPIRV-Cross
closed
OpIgnoreIntersectionKHR is not supported
enhancement out-of-office-maintainer
Hi, The value of OpIgnoreIntersectionKHR changed from 5335 to 4448 (https://github.com/KhronosGroup/SPIRV-Registry/blob/master/extensions/KHR/SPV_KHR_ray_tracing.asciidoc) and SPIRV-Cross didn't reflect this change. It's currently crashing when such an opcode is present. Best regards, Pascal
True
OpIgnoreIntersectionKHR is not supported - Hi, The value of OpIgnoreIntersectionKHR changed from 5335 to 4448 (https://github.com/KhronosGroup/SPIRV-Registry/blob/master/extensions/KHR/SPV_KHR_ray_tracing.asciidoc) and SPIRV-Cross didn't reflect this change. It's currently crashing when such an opcode is present. Best regards, Pascal
main
opignoreintersectionkhr is not supported hi the value of opignoreintersectionkhr changed from to and spirv cross didn t reflect this change it s currently crashing when such an opcode is present best regards pascal
1
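The record above describes an opcode renumbering: `OpIgnoreIntersectionKHR` moved from 5335 (provisional) to 4448 (final) when SPV_KHR_ray_tracing was finalized. One way a translator can absorb such a change is to canonicalize the deprecated encoding on ingest; this is an illustrative sketch (the constant and function names are hypothetical, not SPIRV-Cross's API):

```python
# Illustrative opcode table: the provisional and final encodings of
# OpIgnoreIntersectionKHR from SPV_KHR_ray_tracing.
OP_IGNORE_INTERSECTION_PROVISIONAL = 5335
OP_IGNORE_INTERSECTION_FINAL = 4448

def normalize_opcode(op: int) -> int:
    """Map the deprecated provisional encoding onto the final one so the
    rest of the translator only ever sees 4448."""
    if op == OP_IGNORE_INTERSECTION_PROVISIONAL:
        return OP_IGNORE_INTERSECTION_FINAL
    return op
```

Canonicalizing at the decode boundary keeps downstream handling (and the crash described in the report) limited to a single switch case.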
337,549
24,544,786,220
IssuesEvent
2022-10-12 07:57:12
utkarsh006/Eazy-Android
https://api.github.com/repos/utkarsh006/Eazy-Android
closed
Is there a better way to structure the project readme?
documentation good first issue hacktoberfest
When looking at the readme, there is a lot going on even though there's not much on the page, maybe due to all the labels, the TOC, the images, and how information is structured. I think this can be improved.
1.0
Is there a better way to structure the project readme? - When looking at the readme, there is a lot going on even though there's not much on the page, maybe due to all the labels, the TOC, the images, and how information is structured. I think this can be improved.
non_main
is there a better way to structure the project readme when looking at the readme there is a lot going on even though there s not much on the page maybe due to all the labels the toc the images and how information is structured i think this can be improved
0
4,055
18,956,230,221
IssuesEvent
2021-11-18 20:37:03
svengreb/tmpl
https://api.github.com/repos/svengreb/tmpl
closed
Optimize GitHub action workflow scope
type-improvement context-workflow scope-configurability scope-maintainability target-base
Currently all jobs are summarized in the [`ci` workflow][1] but not separated by their scope, e.g. only Node specific tasks. The workflow is also not optimized to only run when specific files have been changed which results in false-positive executions and wastes limited free tier and developer time. Therefore the `ci` workflow will be optimized. ## CI Node A new `ci-node` workflow will… - only run when any `*.js`, `*.json`, `*.md`, `*.yaml` and `*.yml` file has been modified. This matches the [lint-staged][2], Prettier and remark configurations. See the extensive [GitHub action documentations about `on.<push|pull_request>.paths`][4] and the [filter pattern cheat sheet][5] for more details. - only run for `ubuntu-latest` instead of a matrix with `macos-latest` and `windows-latest` since there is no platform specific code yet. - use cache `npm` dependencies which is [possible as of `actions/setup-node@v2.2.0`][3]. [1]: https://github.com/svengreb/tmpl/blob/0bb40e35/.github/workflows/ci.yml [2]: https://github.com/svengreb/tmpl/blob/0bb40e35/lint-staged.config.js#L12-L13 [3]: https://github.com/actions/setup-node/releases/tag/v2.2.0 [4]: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#onpushpull_requestpaths [5]: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#filter-pattern-cheat-sheet
True
Optimize GitHub action workflow scope - Currently all jobs are summarized in the [`ci` workflow][1] but not separated by their scope, e.g. only Node specific tasks. The workflow is also not optimized to only run when specific files have been changed which results in false-positive executions and wastes limited free tier and developer time. Therefore the `ci` workflow will be optimized. ## CI Node A new `ci-node` workflow will… - only run when any `*.js`, `*.json`, `*.md`, `*.yaml` and `*.yml` file has been modified. This matches the [lint-staged][2], Prettier and remark configurations. See the extensive [GitHub action documentations about `on.<push|pull_request>.paths`][4] and the [filter pattern cheat sheet][5] for more details. - only run for `ubuntu-latest` instead of a matrix with `macos-latest` and `windows-latest` since there is no platform specific code yet. - use cache `npm` dependencies which is [possible as of `actions/setup-node@v2.2.0`][3]. [1]: https://github.com/svengreb/tmpl/blob/0bb40e35/.github/workflows/ci.yml [2]: https://github.com/svengreb/tmpl/blob/0bb40e35/lint-staged.config.js#L12-L13 [3]: https://github.com/actions/setup-node/releases/tag/v2.2.0 [4]: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#onpushpull_requestpaths [5]: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#filter-pattern-cheat-sheet
main
optimize github action workflow scope currently all jobs are summarized in the but not separated by their scope e g only node specific tasks the workflow is also not optimized to only run when specific files have been changed which results in false positive executions and wastes limited free tier and developer time therefore the ci workflow will be optimized ci node a new ci node workflow will… only run when any js json md yaml and yml file has been modified this matches the prettier and remark configurations see the extensive and the for more details only run for ubuntu latest instead of a matrix with macos latest and windows latest since there is no platform specific code yet use cache npm dependencies which is
1
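The proposed `on.<push|pull_request>.paths` trigger in the record above (run `ci-node` only when `*.js`, `*.json`, `*.md`, `*.yaml`, or `*.yml` files change) can be prototyped locally. This sketch uses `fnmatch` as a rough stand-in for GitHub's filter-pattern syntax (an approximation — GitHub's `**` and negation semantics differ):

```python
from fnmatch import fnmatch

# Patterns mirroring the proposed ci-node trigger; fnmatch is only an
# approximation of GitHub's filter-pattern syntax.
CI_NODE_PATTERNS = ["*.js", "*.json", "*.md", "*.yaml", "*.yml"]

def should_run_ci_node(changed_files: list) -> bool:
    """True if any changed file's basename matches a trigger pattern."""
    return any(
        fnmatch(path.rsplit("/", 1)[-1], pattern)
        for path in changed_files
        for pattern in CI_NODE_PATTERNS
    )
```

A push touching only, say, Go sources would skip the workflow, which is exactly the false-positive waste the record wants to eliminate.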
72,561
15,238,232,378
IssuesEvent
2021-02-19 01:25:26
LevyForchh/yugabyte-db
https://api.github.com/repos/LevyForchh/yugabyte-db
opened
CVE-2021-20190 (High) detected in jackson-databind-2.9.9.jar, jackson-databind-2.8.11.4.jar
security vulnerability
## CVE-2021-20190 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.9.jar</b>, <b>jackson-databind-2.8.11.4.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.9.9.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.9.9.jar</p> <p> Dependency Hierarchy: - pac4j-oidc-3.7.0.jar (Root Library) - :x: **jackson-databind-2.9.9.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.8.11.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.11.4.jar</p> <p> Dependency Hierarchy: - play-json_2.11-2.6.14.jar (Root Library) - :x: **jackson-databind-2.8.11.4.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/LevyForchh/yugabyte-db/commit/d5a0ed9bff63893a5435e09333d22846f6bb3acc">d5a0ed9bff63893a5435e09333d22846f6bb3acc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in jackson-databind before 2.9.10.7. FasterXML mishandles the interaction between serialization gadgets and typing. 
The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability. <p>Publish Date: 2021-01-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20190>CVE-2021-20190</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2854">https://github.com/FasterXML/jackson-databind/issues/2854</a></p> <p>Release Date: 2021-01-19</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind-2.9.10.7</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"org.pac4j:pac4j-oidc:3.7.0;com.fasterxml.jackson.core:jackson-databind:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind-2.9.10.7"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.11.4","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"com.typesafe.play:play-json_2.11:2.6.14;com.fasterxml.jackson.core:jackson-databind:2.8.11.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind-2.9.10.7"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-20190","vulnerabilityDetails":"A flaw was found in jackson-databind before 2.9.10.7. FasterXML mishandles the interaction between serialization gadgets and typing. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20190","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-20190 (High) detected in jackson-databind-2.9.9.jar, jackson-databind-2.8.11.4.jar - ## CVE-2021-20190 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.9.jar</b>, <b>jackson-databind-2.8.11.4.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.9.9.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.9.9.jar</p> <p> Dependency Hierarchy: - pac4j-oidc-3.7.0.jar (Root Library) - :x: **jackson-databind-2.9.9.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.8.11.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.11.4.jar</p> <p> Dependency Hierarchy: - play-json_2.11-2.6.14.jar (Root Library) - :x: **jackson-databind-2.8.11.4.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/LevyForchh/yugabyte-db/commit/d5a0ed9bff63893a5435e09333d22846f6bb3acc">d5a0ed9bff63893a5435e09333d22846f6bb3acc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw was found in jackson-databind before 2.9.10.7. FasterXML mishandles the interaction between serialization gadgets and typing. 
The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability. <p>Publish Date: 2021-01-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20190>CVE-2021-20190</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2854">https://github.com/FasterXML/jackson-databind/issues/2854</a></p> <p>Release Date: 2021-01-19</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind-2.9.10.7</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.9","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"org.pac4j:pac4j-oidc:3.7.0;com.fasterxml.jackson.core:jackson-databind:2.9.9","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind-2.9.10.7"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.11.4","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"com.typesafe.play:play-json_2.11:2.6.14;com.fasterxml.jackson.core:jackson-databind:2.8.11.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind-2.9.10.7"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-20190","vulnerabilityDetails":"A flaw was found in jackson-databind before 2.9.10.7. FasterXML mishandles the interaction between serialization gadgets and typing. The highest threat from this vulnerability is to data confidentiality and integrity as well as system availability.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20190","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_main
cve high detected in jackson databind jar jackson databind jar cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy oidc jar root library x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy play json jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was found in jackson databind before fasterxml mishandles the interaction between serialization gadgets and typing the highest threat from this vulnerability is to data confidentiality and integrity as well as system availability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org oidc com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true 
dependencytree com typesafe play play json com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails a flaw was found in jackson databind before fasterxml mishandles the interaction between serialization gadgets and typing the highest threat from this vulnerability is to data confidentiality and integrity as well as system availability vulnerabilityurl
0
95,841
3,961,077,463
IssuesEvent
2016-05-02 10:54:57
pymedusa/SickRage
https://api.github.com/repos/pymedusa/SickRage
opened
[ FEATURE ] Real SSL certs
Feature Request Priority: 3. Low
Using a method similar to Plex we should be able to set up some kind of server that would allow us to issue Let's Encrypt certs for `*.*.pymedusa.com` where the first section is an IP and the second is a hash the client's server has. The reason for the hash is to make sure that the private cert we hand the client's server would only be able to be used for a single session. If their IP changes they won't be able to forge a connection even if someone else is using their old IP. For example `127-0-0-1.askdmlkasdmklasmdl.pymedusa.com`. Another way would be to have a form in Sickrage where the user can choose an available subdomain; we then connect to the server and issue an SSL cert for `*.clients.pymedusa.com` where the `*` is their chosen subdomain. Both of these would require the user to send their IP to our server. Do keep in mind an IP address doesn't reveal anything more than the fact that they're using our software and/or someone has used their IP when asking for a cert to be issued.
1.0
[ FEATURE ] Real SSL certs - Using a method similar to Plex we should be able to set up some kind of server that would allow us to issue Let's Encrypt certs for `*.*.pymedusa.com` where the first section is an IP and the second is a hash the client's server has. The reason for the hash is to make sure that the private cert we hand the client's server would only be able to be used for a single session. If their IP changes they won't be able to forge a connection even if someone else is using their old IP. For example `127-0-0-1.askdmlkasdmklasmdl.pymedusa.com`. Another way would be to have a form in Sickrage where the user can choose an available subdomain; we then connect to the server and issue an SSL cert for `*.clients.pymedusa.com` where the `*` is their chosen subdomain. Both of these would require the user to send their IP to our server. Do keep in mind an IP address doesn't reveal anything more than the fact that they're using our software and/or someone has used their IP when asking for a cert to be issued.
non_main
real ssl certs using a method similar to plex we should be able to set up some kind of server that would allow us to issue let s encrypt certs for pymedusa com where the first section is an ip and the second is a hash the client s server has the reason for the hash is to make sure that the private cert we hand the client s server would only be able to be used for a single session if their ip changes they won t be able to forge a connection even if someone else is using their old ip for example askdmlkasdmklasmdl pymedusa com another way would be to have a form in sickrage where the user can choose an available subdomain we then connect to the server and issue an ssl cert for clients pymedusa com where the is their chosen subdomain both of these would require the user to send their ip to our server do keep in mind an ip address doesn t reveal anything more than the fact that they re using our software and or someone has used their ip when asking for a cert to be issued
0
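The hostname scheme described in the SickRage record above (the client's IP with dashes as the first label, a per-session hash as the second, e.g. `127-0-0-1.askdmlkasdmklasmdl.pymedusa.com`) can be sketched with a small helper. This is an illustration only, assuming the fixed `pymedusa.com` suffix from the record; the function name is hypothetical and the record specifies no implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// Hypothetical sketch of the per-client hostname from the record:
// "<ip-with-dashes>.<hash>.pymedusa.com". Dots are not valid inside a
// single DNS label, so the IP's dots are replaced with dashes.
std::string clientHostname(std::string ip, const std::string& hash) {
    std::replace(ip.begin(), ip.end(), '.', '-');
    return ip + "." + hash + ".pymedusa.com";
}
```

For example, `clientHostname("127.0.0.1", "askdmlkasdmklasmdl")` yields the exact hostname quoted in the record.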
73,189
14,006,844,541
IssuesEvent
2020-10-28 20:36:15
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
[4.0] FontAwesome is loaded twice
No Code Attached Yet
### What needs to be fixed For some reason FontAwesome is still stuffed in main template files but it should be (and is) loaded from media/vendor directory. ### Why this should be fixed Bad code, bad frontend performance. ### How would you fix it Remove from `template.css`. ### Side Effects expected No.
1.0
[4.0] FontAwesome is loaded twice - ### What needs to be fixed For some reason FontAwesome is still stuffed in main template files but it should be (and is) loaded from media/vendor directory. ### Why this should be fixed Bad code, bad frontend performance. ### How would you fix it Remove from `template.css`. ### Side Effects expected No.
non_main
fontawesome is loaded twice what needs to be fixed for some reason fontawesome is still stuffed in main template files but it should be and is loaded from media vendor directory why this should be fixed bad code bad frontend performance how would you fix it remove from template css side effects expected no
0
5,667
29,489,534,002
IssuesEvent
2023-06-02 12:28:38
coq-community/manifesto
https://api.github.com/repos/coq-community/manifesto
opened
Proposal to move CoqIDE to Coq-community
move-project maintainer-wanted
**Project name:** CoqIDE **Initial author(s):** The Coq development team, INRIA, CNRS, and contributors **Current URL:** https://github.com/coq/coq **Kind:** Integrated Development Environment (IDE) for Coq **License:** [LGPL-2.1-only](https://spdx.org/licenses/LGPL-2.1-only.html) **Description:** [CoqIDE](https://coq.inria.fr/refman/practical-tools/coqide.html) is an IDE implemented using the OCaml programming language and the GTK3 widget toolkit for graphical user interfaces (GUIs), thanks to the [lablgtk3](https://opam.ocaml.org/packages/lablgtk3/) OCaml package. CoqIDE uses a [legacy XML-based protocol](https://github.com/coq/coq/blob/master/dev/doc/xml-protocol.md) to communicate with Coq and is licensed under the open source [LGPL-2.1 license](https://spdx.org/licenses/LGPL-2.1-only.html). **Status:** CoqIDE's source code is currently part of Coq's [GitHub repository](https://github.com/coq/coq). Due to a desire to shift IDE-related work toward [LSP](https://microsoft.github.io/language-server-protocol/) and [VS Code](https://code.visualstudio.com/) support, the Coq core team no longer considers CoqIDE maintenance and evolution a priority. With this proposal issue, the team wants to give an opportunity to the Coq community to take over CoqIDE maintenance and lead its future evolution. More details about the context and the plans for the future of IDEs for Coq can be found in [Coq CEP 68 leading to this proposal issue](https://github.com/coq/ceps/blob/master/text/068-coqide-split.md). **New maintainer:** looking for volunteers To volunteer, please respond to this GitHub issue with a brief motivation and summary of relevant experience for becoming a CoqIDE maintainer. As part of their reply to this issue, volunteers are encouraged to briefly present their short-term and long-term plans for CoqIDE and how long they think they will remain active on CoqIDE maintenance. 
However, this won't be considered as a commitment, as plans and priorities can evolve based on the context and personal circumstances. The maintainer(s) will be selected from the issue responders by the [Coq core team](https://coq.inria.fr/coq-team.html) and Coq-community [organization owners](https://github.com/coq-community/manifesto#process--organizational-aspects). Responders not selected will still be encouraged to contribute to CoqIDE in collaboration with the new maintainer(s) and other contributors. Anyone else planning to get involved as an active and regular contributor to the CoqIDE project is also welcome to make themselves known in this GitHub issue and to briefly present which improvements and changes they plan to propose.
True
Proposal to move CoqIDE to Coq-community - **Project name:** CoqIDE **Initial author(s):** The Coq development team, INRIA, CNRS, and contributors **Current URL:** https://github.com/coq/coq **Kind:** Integrated Development Environment (IDE) for Coq **License:** [LGPL-2.1-only](https://spdx.org/licenses/LGPL-2.1-only.html) **Description:** [CoqIDE](https://coq.inria.fr/refman/practical-tools/coqide.html) is an IDE implemented using the OCaml programming language and the GTK3 widget toolkit for graphical user interfaces (GUIs), thanks to the [lablgtk3](https://opam.ocaml.org/packages/lablgtk3/) OCaml package. CoqIDE uses a [legacy XML-based protocol](https://github.com/coq/coq/blob/master/dev/doc/xml-protocol.md) to communicate with Coq and is licensed under the open source [LGPL-2.1 license](https://spdx.org/licenses/LGPL-2.1-only.html). **Status:** CoqIDE's source code is currently part of Coq's [GitHub repository](https://github.com/coq/coq). Due to a desire to shift IDE-related work toward [LSP](https://microsoft.github.io/language-server-protocol/) and [VS Code](https://code.visualstudio.com/) support, the Coq core team no longer considers CoqIDE maintenance and evolution a priority. With this proposal issue, the team wants to give an opportunity to the Coq community to take over CoqIDE maintenance and lead its future evolution. More details about the context and the plans for the future of IDEs for Coq can be found in [Coq CEP 68 leading to this proposal issue](https://github.com/coq/ceps/blob/master/text/068-coqide-split.md). **New maintainer:** looking for volunteers To volunteer, please respond to this GitHub issue with a brief motivation and summary of relevant experience for becoming a CoqIDE maintainer. As part of their reply to this issue, volunteers are encouraged to briefly present their short-term and long-term plans for CoqIDE and how long they think they will remain active on CoqIDE maintenance. 
However, this won't be considered as a commitment, as plans and priorities can evolve based on the context and personal circumstances. The maintainer(s) will be selected from the issue responders by the [Coq core team](https://coq.inria.fr/coq-team.html) and Coq-community [organization owners](https://github.com/coq-community/manifesto#process--organizational-aspects). Responders not selected will still be encouraged to contribute to CoqIDE in collaboration with the new maintainer(s) and other contributors. Anyone else planning to get involved as an active and regular contributor to the CoqIDE project is also welcome to make themselves known in this GitHub issue and to briefly present which improvements and changes they plan to propose.
main
proposal to move coqide to coq community project name coqide initial author s the coq development team inria cnrs and contributors current url kind integrated development environment ide for coq license description is an ide implemented using the ocaml programming language and the widget toolkit for graphical user interfaces guis thanks to the ocaml package coqide uses a to communicate with coq and is licensed under the open source status coqide s source code is currently part of coq s due to a desire to shift ide related work toward and support the coq core team no longer considers coqide maintenance and evolution a priority with this proposal issue the team wants to give an opportunity to the coq community to take over coqide maintenance and lead its future evolution more details about the context and the plans for the future of ides for coq can be found in new maintainer looking for volunteers to volunteer please respond to this github issue with a brief motivation and summary of relevant experience for becoming a coqide maintainer as part of their reply to this issue volunteers are encouraged to briefly present their short term and long term plans for coqide and how long they think they will remain active on coqide maintenance however this won t be considered as a commitment as plans and priorities can evolve based on the context and personal circumstances the maintainer s will be selected from the issue responders by the and coq community responders not selected will still be encouraged to contribute to coqide in collaboration with the new maintainer s and other contributors anyone else planning to get involved as an active and regular contributor to the coqide project is also welcome to make themselves known in this github issue and to briefly present which improvements and changes they plan to propose
1
2,114
7,194,340,112
IssuesEvent
2018-02-04 03:16:42
Aqours/Aqours.github.io
https://api.github.com/repos/Aqours/Aqours.github.io
closed
Windows 绿色程序
Software Unmaintained
## Classical - Aegisub - Beyond Compare - CPU-Z - CrystalDiskInfo - DiskGenius - DNS Jumper - EnableLoopbackUtility - FastStone Capture - FastStone MaxView - GifCam - GPU-Z - HWMonitor - LocaleEmulator - MKVToolNix - MPC-HC - Nexus Files - Nginx - OBS Studio - Pazera Free Audio Extractor - PuTTYgen - Rufus - Shadowsocks - Snipaste - SwitchHosts! - Tor Browser - Universal Extractor - WinSCP (_+winscppwd.exe_) - XMedia Recode
True
Windows 绿色程序 - ## Classical - Aegisub - Beyond Compare - CPU-Z - CrystalDiskInfo - DiskGenius - DNS Jumper - EnableLoopbackUtility - FastStone Capture - FastStone MaxView - GifCam - GPU-Z - HWMonitor - LocaleEmulator - MKVToolNix - MPC-HC - Nexus Files - Nginx - OBS Studio - Pazera Free Audio Extractor - PuTTYgen - Rufus - Shadowsocks - Snipaste - SwitchHosts! - Tor Browser - Universal Extractor - WinSCP (_+winscppwd.exe_) - XMedia Recode
main
windows 绿色程序 classical aegisub beyond compare cpu z crystaldiskinfo diskgenius dns jumper enableloopbackutility faststone capture faststone maxview gifcam gpu z hwmonitor localeemulator mkvtoolnix mpc hc nexus files nginx obs studio pazera free audio extractor puttygen rufus shadowsocks snipaste switchhosts tor browser universal extractor winscp winscppwd exe xmedia recode
1
2,962
10,616,920,473
IssuesEvent
2019-10-12 15:21:41
vostpt/mobile-app
https://api.github.com/repos/vostpt/mobile-app
closed
Report Problem Screen
Needs Maintainers Help good first issue hacktoberfest
**Description** Create Problems Screen **File Location** ``` - presentation |__ ui ``` **Requirements** - Screen with text information about how to report a problem - Appbar must have a "back" button **UI** <img width="365" alt="imagem" src="https://user-images.githubusercontent.com/10728633/63040257-ff6d6800-bebc-11e9-9e8a-25b6639cbf8d.png"> **NOTES** Assume the following text: ``` Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. In iaculis nunc sed augue. A scelerisque purus semper eget duis at tellus at urna. Quisque sagittis purus sit amet volutpat. Maecenas volutpat blandit aliquam etiam. Eu facilisis sed odio morbi quis commodo odio. Aliquet risus feugiat in ante metus. Nec ullamcorper sit amet risus. Libero id faucibus nisl tincidunt eget nullam. Non consectetur a erat nam at. Cursus eget nunc scelerisque viverra mauris in aliquam sem. At tempor commodo ullamcorper a lacus vestibulum sed. Urna molestie at elementum eu facilisis sed. Fermentum dui faucibus in ornare. Arcu vitae elementum curabitur vitae nunc sed velit dignissim. Nunc faucibus a pellentesque sit amet porttitor eget. Nunc sed velit dignissim sodales ut eu sem. Interdum posuere lorem ipsum dolor. Eu mi bibendum neque egestas congue quisque. Convallis convallis tellus id interdum velit laoreet id donec. email@email.com ``` The email should be clickable and it should open a new e-mail for `email@email.com`.
True
Report Problem Screen - **Description** Create Problems Screen **File Location** ``` - presentation |__ ui ``` **Requirements** - Screen with text information about how to report a problem - Appbar must have a "back" button **UI** <img width="365" alt="imagem" src="https://user-images.githubusercontent.com/10728633/63040257-ff6d6800-bebc-11e9-9e8a-25b6639cbf8d.png"> **NOTES** Assume the following text: ``` Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. In iaculis nunc sed augue. A scelerisque purus semper eget duis at tellus at urna. Quisque sagittis purus sit amet volutpat. Maecenas volutpat blandit aliquam etiam. Eu facilisis sed odio morbi quis commodo odio. Aliquet risus feugiat in ante metus. Nec ullamcorper sit amet risus. Libero id faucibus nisl tincidunt eget nullam. Non consectetur a erat nam at. Cursus eget nunc scelerisque viverra mauris in aliquam sem. At tempor commodo ullamcorper a lacus vestibulum sed. Urna molestie at elementum eu facilisis sed. Fermentum dui faucibus in ornare. Arcu vitae elementum curabitur vitae nunc sed velit dignissim. Nunc faucibus a pellentesque sit amet porttitor eget. Nunc sed velit dignissim sodales ut eu sem. Interdum posuere lorem ipsum dolor. Eu mi bibendum neque egestas congue quisque. Convallis convallis tellus id interdum velit laoreet id donec. email@email.com ``` The email should be clickable and it should open a new e-mail for `email@email.com`.
main
report problem screen description create problems screen file location presentation ui requirements screen with text information about how to report a problem appbar must have a back button ui img width alt imagem src notes assume the following text lorem ipsum dolor sit amet consectetur adipiscing elit sed do eiusmod tempor incididunt ut labore et dolore magna aliqua in iaculis nunc sed augue a scelerisque purus semper eget duis at tellus at urna quisque sagittis purus sit amet volutpat maecenas volutpat blandit aliquam etiam eu facilisis sed odio morbi quis commodo odio aliquet risus feugiat in ante metus nec ullamcorper sit amet risus libero id faucibus nisl tincidunt eget nullam non consectetur a erat nam at cursus eget nunc scelerisque viverra mauris in aliquam sem at tempor commodo ullamcorper a lacus vestibulum sed urna molestie at elementum eu facilisis sed fermentum dui faucibus in ornare arcu vitae elementum curabitur vitae nunc sed velit dignissim nunc faucibus a pellentesque sit amet porttitor eget nunc sed velit dignissim sodales ut eu sem interdum posuere lorem ipsum dolor eu mi bibendum neque egestas congue quisque convallis convallis tellus id interdum velit laoreet id donec email email com the email should be clickable and it should open a new e mail for email email com
1
284,731
21,467,147,140
IssuesEvent
2022-04-26 05:45:46
alpaka-group/alpaka
https://api.github.com/repos/alpaka-group/alpaka
closed
use constexpr in device function
Type:Question Type:Documentation
Dear Maintainers, when I refer to a C++ `constexpr` function from inside a (inline-) device function, should I mark the `constexpr` as a device function as well? In a short discussion @j-stephan noted that this (probably) depends on the used compiler. Apart from the question: Where/How should this be noted in the documentation? I'd suggest a Notes/Misc/Q&A section, but I haven't found any.
1.0
use constexpr in device function - Dear Maintainers, when I refer to a C++ `constexpr` function from inside a (inline-) device function, should I mark the `constexpr` as a device function as well? In a short discussion @j-stephan noted that this (probably) depends on the used compiler. Apart from the question: Where/How should this be noted in the documentation? I'd suggest a Notes/Misc/Q&A section, but I haven't found any.
non_main
use constexpr in device function dear maintainers when i refer to a c constexpr function from inside a inline device function should i mark the constexpr as a device function as well in a short discussion j stephan noted that this probably depends on the used compiler apart from the question where how should this be noted in the documentation i d suggest a notes misc q a section but i haven t found any
0
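The alpaka record above asks whether a `constexpr` function called from a device function must itself carry a device annotation. A common, compiler-portable answer is to annotate it through a macro; nvcc alone can also accept an unannotated `constexpr` callee in device code when built with `--expt-relaxed-constexpr`. The macro name below is an assumption for illustration (alpaka ships its own spelling), and the snippet compiles as plain C++ as well:

```cpp
// Portable function annotation: host+device under nvcc, nothing otherwise.
// The macro name PORTABLE_FN is hypothetical; libraries define their own.
#if defined(__CUDACC__)
#define PORTABLE_FN __host__ __device__
#else
#define PORTABLE_FN
#endif

// Explicitly annotating the constexpr function makes it callable from
// device code on every CUDA compiler; with nvcc's --expt-relaxed-constexpr
// an unannotated constexpr callee would also be accepted in device code.
PORTABLE_FN constexpr int square(int x) { return x * x; }

// The function remains usable in host-side constant expressions.
static_assert(square(4) == 16, "square must fold at compile time");
```

The macro approach sidesteps the compiler-dependence noted in the discussion: clang's CUDA mode and hip/SYCL toolchains have differing defaults, while an explicit annotation works everywhere.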
63,024
12,278,454,702
IssuesEvent
2020-05-08 10:00:14
fac19/snake
https://api.github.com/repos/fac19/snake
opened
Using a :focus pseudo-class
code review
CSS: The hover on the snakes looks really cool. You could add a :focus pseudo-class so it stays highlighted after you've selected it. Same with the levels
1.0
Using a :focus pseudo-class - CSS: The hover on the snakes looks really cool. You could add a :focus pseudo-class so it stays highlighted after you've selected it. Same with the levels
non_main
using a focus pseudo class css the hover on the snakes looks really cool you could add a focus pseudo class so it stays highlighted after you ve selected it same with the levels
0
313,761
9,575,759,811
IssuesEvent
2019-05-07 07:24:15
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.wnd.com - see bug description
browser-firefox engine-gecko priority-normal type-search-hijacking
<!-- @browser: Firefox 66.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0 --> <!-- @reported_with: --> **URL**: https://www.wnd.com/ **Browser / Version**: Firefox 66.0 **Operating System**: Windows 10 **Tested Another Browser**: No **Problem type**: Something else **Description**: hijacking **Steps to Reproduce**: This has happened before but only when I try to connect to world net daily. A page appears and locks firefox and it's options, all I can do is close it. the web address is : https://retdrapstoningtank.pro/en/?search=_%5E~%7Ft%5BB%25%C9kh%CA%1BE%EFlr%A5%7F%1BM7%A2%B0%5B%8F&list=600000#012345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455565758596061626364656667686970717273747576777879808182838485868788899091929394959697989910010110210310410510610710810911011111211311411511611711811912012112212312412512612712812913013113213313413513613713813914014114214314414514614714814915015115215315415515615715815916016116216316416516616716816917017117217317417517617717817918018118218318418518618718818919019119219319419519619719819920020120220320420520620720820921021121221321421521621721821922022122222322422522622722822923023123223323423523623723823924024124224324424524624724824925025125225325425525625725825926026126226326426526626726826927027127227327427527627727827928028128228328428528628728828929029129229329429529629729829930030130230330430530630730830931031131231331431531631731831932032132232332432532632732832933033133233333433533633733833934034134234334434534634734834935035135235335435535635735835936036136236336436536636736836937037137237337437537637737837938038138238338438538638738838939039139239339439539639739839940040140240340440540640740840941041141241341441541641741841942042142242342442542642742842943043143243343443543643743843944044144244344444544644744844945045145245345445545645745845946046146246346446546646746846947047147247347447547647747847948048148248348
4485486487488489490491492493494495496497498499500501502503504505506507508509510511512513514515516517518519520521522523524525526527528529530531532533534535536537538539540541542543544545546547548549550551552553554555556557558559560561562563564565566567568569570571572573574575576577578579580581582583584585586587588589590591592593594595596597598599600601602603604605606607608609610611612613614615616617618619620621622623624625626627628629630631632633634635636637638639640641642643644645646647648649650651652653654655656657658659660661662663664665 gprat304@juno.com [![Screenshot Description](https://webcompat.com/uploads/2019/4/30dcc906-0e35-4cc8-a85f-612ecf9ce09e-thumb.jpg)](https://webcompat.com/uploads/2019/4/30dcc906-0e35-4cc8-a85f-612ecf9ce09e.jpg) <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.wnd.com - see bug description - <!-- @browser: Firefox 66.0 --> <!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0 --> <!-- @reported_with: --> **URL**: https://www.wnd.com/ **Browser / Version**: Firefox 66.0 **Operating System**: Windows 10 **Tested Another Browser**: No **Problem type**: Something else **Description**: hijacking **Steps to Reproduce**: This has happened before but only when I try to connect to world net daily. A page appears and locks firefox and it's options, all I can do is close it. the web address is : https://retdrapstoningtank.pro/en/?search=_%5E~%7Ft%5BB%25%C9kh%CA%1BE%EFlr%A5%7F%1BM7%A2%B0%5B%8F&list=600000#012345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455565758596061626364656667686970717273747576777879808182838485868788899091929394959697989910010110210310410510610710810911011111211311411511611711811912012112212312412512612712812913013113213313413513613713813914014114214314414514614714814915015115215315415515615715815916016116216316416516616716816917017117217317417517617717817918018118218318418518618718818919019119219319419519619719819920020120220320420520620720820921021121221321421521621721821922022122222322422522622722822923023123223323423523623723823924024124224324424524624724824925025125225325425525625725825926026126226326426526626726826927027127227327427527627727827928028128228328428528628728828929029129229329429529629729829930030130230330430530630730830931031131231331431531631731831932032132232332432532632732832933033133233333433533633733833934034134234334434534634734834935035135235335435535635735835936036136236336436536636736836937037137237337437537637737837938038138238338438538638738838939039139239339439539639739839940040140240340440540640740840941041141241341441541641741841942042142242342442542642742842943043143243343443543643743843944044144244344444544644744844945045145245345445545645745845946046146246346446546646746846947047147
2473474475476477478479480481482483484485486487488489490491492493494495496497498499500501502503504505506507508509510511512513514515516517518519520521522523524525526527528529530531532533534535536537538539540541542543544545546547548549550551552553554555556557558559560561562563564565566567568569570571572573574575576577578579580581582583584585586587588589590591592593594595596597598599600601602603604605606607608609610611612613614615616617618619620621622623624625626627628629630631632633634635636637638639640641642643644645646647648649650651652653654655656657658659660661662663664665 gprat304@juno.com [![Screenshot Description](https://webcompat.com/uploads/2019/4/30dcc906-0e35-4cc8-a85f-612ecf9ce09e-thumb.jpg)](https://webcompat.com/uploads/2019/4/30dcc906-0e35-4cc8-a85f-612ecf9ce09e.jpg) <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_main
see bug description url browser version firefox operating system windows tested another browser no problem type something else description hijacking steps to reproduce this has happened before but only when i try to connect to world net daily a page appears and locks firefox and it s options all i can do is close it the web address is juno com browser configuration none from with ❤️
0
272,904
29,795,124,935
IssuesEvent
2023-06-16 01:13:02
billmcchesney1/pacbot
https://api.github.com/repos/billmcchesney1/pacbot
closed
CVE-2018-19360 (Critical) detected in multiple libraries - autoclosed
Mend: dependency security vulnerability
## CVE-2018-19360 - Critical Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.6.7.2.jar</b>, <b>jackson-databind-2.9.6.jar</b>, <b>jackson-databind-2.9.4.jar</b>, <b>jackson-databind-2.8.7.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.6.7.2.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /commons/pac-batch-commons/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar</p> <p> Dependency Hierarchy: - aws-java-sdk-efs-1.11.636.jar (Root Library) - aws-java-sdk-core-1.11.636.jar - :x: **jackson-databind-2.6.7.2.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /api/pacman-api-config/pom.xml</p> <p>Path to vulnerable library: 
/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p> <p> Dependency Hierarchy: - spring-cloud-starter-config-2.0.0.RELEASE.jar (Root Library) - :x: **jackson-databind-2.9.6.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /jobs/azure-discovery/pom.xml</p> <p>Path to vulnerable library: /canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.4.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.8.7.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: 
/jobs/pacman-cloud-notifications/pom.xml</p> <p>Path to vulnerable library: /canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.7.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/pacbot/commit/acf9a0620c1a37cee4f2896d71e1c3731c5c7b06">acf9a0620c1a37cee4f2896d71e1c3731c5c7b06</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/critical_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.8 might allow attackers to have unspecified impact by leveraging failure to block the axis2-transport-jms class from polymorphic deserialization. <p>Publish Date: 2019-01-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-19360>CVE-2018-19360</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19360">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19360</a></p> <p>Release Date: 2019-01-02</p> <p>Fix Resolution: 2.8.11.3</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
True
CVE-2018-19360 (Critical) detected in multiple libraries - autoclosed - ## CVE-2018-19360 - Critical Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.6.7.2.jar</b>, <b>jackson-databind-2.9.6.jar</b>, <b>jackson-databind-2.9.4.jar</b>, <b>jackson-databind-2.8.7.jar</b></p></summary> <p> <details><summary><b>jackson-databind-2.6.7.2.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /commons/pac-batch-commons/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.7.2/jackson-databind-2.6.7.2.jar</p> <p> Dependency Hierarchy: - aws-java-sdk-efs-1.11.636.jar (Root Library) - aws-java-sdk-core-1.11.636.jar - :x: **jackson-databind-2.6.7.2.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /api/pacman-api-config/pom.xml</p> <p>Path to vulnerable library: 
/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p> <p> Dependency Hierarchy: - spring-cloud-starter-config-2.0.0.RELEASE.jar (Root Library) - :x: **jackson-databind-2.9.6.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.9.4.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /jobs/azure-discovery/pom.xml</p> <p>Path to vulnerable library: /canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.9.4.jar** (Vulnerable Library) </details> <details><summary><b>jackson-databind-2.8.7.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: 
/jobs/pacman-cloud-notifications/pom.xml</p> <p>Path to vulnerable library: /canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar,/canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.7/jackson-databind-2.8.7.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.8.7.jar** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/pacbot/commit/acf9a0620c1a37cee4f2896d71e1c3731c5c7b06">acf9a0620c1a37cee4f2896d71e1c3731c5c7b06</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/critical_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.8 might allow attackers to have unspecified impact by leveraging failure to block the axis2-transport-jms class from polymorphic deserialization. <p>Publish Date: 2019-01-02 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-19360>CVE-2018-19360</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19360">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19360</a></p> <p>Release Date: 2019-01-02</p> <p>Fix Resolution: 2.8.11.3</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
non_main
cve critical detected in multiple libraries autoclosed cve critical severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file commons pac batch commons pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy aws java sdk efs jar root library aws java sdk core jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file api pacman api config pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring cloud starter config release jar root library x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to 
dependency file jobs azure discovery pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file jobs pacman cloud notifications pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before might allow attackers to have unspecified impact by leveraging failure to block the transport jms class from polymorphic deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue
0
5,479
27,371,751,698
IssuesEvent
2023-02-28 00:25:58
aws/aws-sam-cli
https://api.github.com/repos/aws/aws-sam-cli
closed
sam validate fails for cognito lambda trigger
area/validate maintainer/need-response
<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. --> ### Description When running the below sam validate or deploying the following template: AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Resources: UserPool: Type: AWS::Cognito::UserPool Properties: AccountRecoverySetting: RecoveryMechanisms: - Name: verified_email Priority: 1 - Name: verified_phone_number Priority: 2 AutoVerifiedAttributes: - email - phone_number DeviceConfiguration: ChallengeRequiredOnNewDevice: true DeviceOnlyRememberedOnUserPrompt: true EmailConfiguration: EmailSendingAccount: COGNITO_DEFAULT EmailVerificationSubject: Cognito Confirmation EmailVerificationMessage: 'Your verification code is {####}.' EnabledMfas: - SMS_MFA - SOFTWARE_TOKEN_MFA MfaConfiguration: OPTIONAL Schema: - Name: family_name AttributeDataType: String Mutable: true Required: true - Name: given_name AttributeDataType: String Mutable: true Required: true - Name: email AttributeDataType: String Mutable: false Required: true - Name: phone_number AttributeDataType: String Mutable: true Required: true UsernameAttributes: - email UserPoolAddOns: AdvancedSecurityMode: ENFORCED PostSignupConfirmationLambda: Type: AWS::Serverless::Function Properties: Runtime: !Ref LambdaRuntime CodeUri: Bucket: !Ref Bucket Key: !Ref LambdaPostSignupConfirmationFunctionS3Key Handler: bin/index.handler Timeout: 20 Events: PostConfirmationEvent: Type: Cognito Properties: UserPool: !Ref UserPool Trigger: PostConfirmation I get the following error Error: [InvalidResourceException('UserPool', 'property AccountRecoverySetting not defined for resource of type AWS::Cognito::UserPool')] ('UserPool', 'property AccountRecoverySetting not defined for resource of type AWS::Cognito::UserPool'). 
If I remove AccountRecoverySetting I get the following error: Error: [InvalidResourceException('UserPool', 'property EnabledMfas not defined for resource of type AWS::Cognito::UserPool')] ('UserPool', 'property EnabledMfas not defined for resource of type AWS::Cognito::UserPool') If I remove EnabledMfas the validation passes. I can also make it pass removing the Events from the lambda. ### Steps to reproduce Run sam validate on above template ### Observed result sam validate --debug Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics Template provided at 'C:\brokerid\test\template.yaml' was invalid SAM Template. Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam validate', 'duration': 4308, 'exitReason': 'InvalidSamTemplateException', 'exitCode': 1, 'requestId': '45693a59-2b0b-4cdc-9706-7431051aac55', 'installationId': '44afe8f0-166a-4450-b80c-b47b83621b23', 'sessionId': 'd244ff86-d510-4215-8543-3479bdcb1e62', 'executionEnvironment': 'CLI', 'pyversion': '3.7.6', 'samcliVersion': '0.41.0'}}]} HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1) Error: [InvalidResourceException('UserPool', 'property AccountRecoverySetting not defined for resource of type AWS::Cognito::UserPool')] ('UserPool', 'property AccountRecoverySetting not defined for resource of type AWS::Cognito::UserPool') ### Expected result Validation pass ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Windows 10 2. `sam --version`: 0.41.0 `Add --debug flag to command you are running`
True
sam validate fails for cognito lambda trigger - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed). If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. --> ### Description When running the below sam validate or deploying the following template: AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Resources: UserPool: Type: AWS::Cognito::UserPool Properties: AccountRecoverySetting: RecoveryMechanisms: - Name: verified_email Priority: 1 - Name: verified_phone_number Priority: 2 AutoVerifiedAttributes: - email - phone_number DeviceConfiguration: ChallengeRequiredOnNewDevice: true DeviceOnlyRememberedOnUserPrompt: true EmailConfiguration: EmailSendingAccount: COGNITO_DEFAULT EmailVerificationSubject: Cognito Confirmation EmailVerificationMessage: 'Your verification code is {####}.' EnabledMfas: - SMS_MFA - SOFTWARE_TOKEN_MFA MfaConfiguration: OPTIONAL Schema: - Name: family_name AttributeDataType: String Mutable: true Required: true - Name: given_name AttributeDataType: String Mutable: true Required: true - Name: email AttributeDataType: String Mutable: false Required: true - Name: phone_number AttributeDataType: String Mutable: true Required: true UsernameAttributes: - email UserPoolAddOns: AdvancedSecurityMode: ENFORCED PostSignupConfirmationLambda: Type: AWS::Serverless::Function Properties: Runtime: !Ref LambdaRuntime CodeUri: Bucket: !Ref Bucket Key: !Ref LambdaPostSignupConfirmationFunctionS3Key Handler: bin/index.handler Timeout: 20 Events: PostConfirmationEvent: Type: Cognito Properties: UserPool: !Ref UserPool Trigger: PostConfirmation I get the following error Error: [InvalidResourceException('UserPool', 'property AccountRecoverySetting not defined for resource of type AWS::Cognito::UserPool')] ('UserPool', 'property AccountRecoverySetting not defined for resource of type AWS::Cognito::UserPool'). 
If I remove AccountRecoverySetting I get the following error: Error: [InvalidResourceException('UserPool', 'property EnabledMfas not defined for resource of type AWS::Cognito::UserPool')] ('UserPool', 'property EnabledMfas not defined for resource of type AWS::Cognito::UserPool') If I remove EnabledMfas the validation passes. I can also make it pass removing the Events from the lambda. ### Steps to reproduce Run sam validate on above template ### Observed result sam validate --debug Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics Template provided at 'C:\brokerid\test\template.yaml' was invalid SAM Template. Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam validate', 'duration': 4308, 'exitReason': 'InvalidSamTemplateException', 'exitCode': 1, 'requestId': '45693a59-2b0b-4cdc-9706-7431051aac55', 'installationId': '44afe8f0-166a-4450-b80c-b47b83621b23', 'sessionId': 'd244ff86-d510-4215-8543-3479bdcb1e62', 'executionEnvironment': 'CLI', 'pyversion': '3.7.6', 'samcliVersion': '0.41.0'}}]} HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1) Error: [InvalidResourceException('UserPool', 'property AccountRecoverySetting not defined for resource of type AWS::Cognito::UserPool')] ('UserPool', 'property AccountRecoverySetting not defined for resource of type AWS::Cognito::UserPool') ### Expected result Validation pass ### Additional environment details (Ex: Windows, Mac, Amazon Linux etc) 1. OS: Windows 10 2. `sam --version`: 0.41.0 `Add --debug flag to command you are running`
main
sam validate fails for cognito lambda trigger make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description when running the below sam validate or deploying the following template awstemplateformatversion transform aws serverless resources userpool type aws cognito userpool properties accountrecoverysetting recoverymechanisms name verified email priority name verified phone number priority autoverifiedattributes email phone number deviceconfiguration challengerequiredonnewdevice true deviceonlyrememberedonuserprompt true emailconfiguration emailsendingaccount cognito default emailverificationsubject cognito confirmation emailverificationmessage your verification code is enabledmfas sms mfa software token mfa mfaconfiguration optional schema name family name attributedatatype string mutable true required true name given name attributedatatype string mutable true required true name email attributedatatype string mutable false required true name phone number attributedatatype string mutable true required true usernameattributes email userpooladdons advancedsecuritymode enforced postsignupconfirmationlambda type aws serverless function properties runtime ref lambdaruntime codeuri bucket ref bucket key ref handler bin index handler timeout events postconfirmationevent type cognito properties userpool ref userpool trigger postconfirmation i get the following error error userpool property accountrecoverysetting not defined for resource of type aws cognito userpool if i remove accountrecoverysetting i get the following error error userpool property enabledmfas not defined for resource of type aws cognito userpool if i remove enabledmfas the validation passes i can also make it pass removing the events from the lambda steps to reproduce run sam validate on above template observed result sam validate debug telemetry endpoint 
configured to be template provided at c brokerid test template yaml was invalid sam template sending telemetry metrics httpsconnectionpool host aws serverless tools telemetry us west amazonaws com port read timed out read timeout error userpool property accountrecoverysetting not defined for resource of type aws cognito userpool expected result validation pass additional environment details ex windows mac amazon linux etc os windows sam version add debug flag to command you are running
1
501,392
14,527,285,115
IssuesEvent
2020-12-14 15:10:57
MicroHealthLLC/mGis
https://api.github.com/repos/MicroHealthLLC/mGis
closed
when clicking the title in sheets often times it will select text vs just sorting
Priority
![image](https://user-images.githubusercontent.com/3675043/101497496-0c775b80-3939-11eb-9a27-86b17a20835b.png) when it selects text, you cannot sort. you then have to click another part of the screen to then sort again until the text gets mistakenly selected again
1.0
when clicking the title in sheets often times it will select text vs just sorting - ![image](https://user-images.githubusercontent.com/3675043/101497496-0c775b80-3939-11eb-9a27-86b17a20835b.png) when it selects text, you cannot sort. you then have to click another part of the screen to then sort again until the text gets mistakenly selected again
non_main
when clicking the title in sheets often times it will select text vs just sorting when it selects text you cannot sort you then have to click another part of the screen to then sort again until the text gets mistakenly selected again
0
92,071
3,865,242,531
IssuesEvent
2016-04-08 16:39:21
AlexisNichel/bioid-mobile-base
https://api.github.com/repos/AlexisNichel/bioid-mobile-base
closed
Get Users
high-priority
Create a syncUsers method in a sync.js service with a dependency on api.js. It should first get the fingerprints from localstorage, or create an empty array if none exist. Then store the fingerprints on their own, so the reader can query them faster, and the user data separately. ``` js //$api.getUsers().then(function(apiUsers){ var users = apiUsers; var fingerPrints = $localstorage.getObject('fingerPrints') || [], userData = $localstorage.getObject('users') || [] angular.forEach(users, function(value) { fingerPrints[value.IdHuella] = value.Huella; delete value.Huella; userData[value.IdHuella] = value; }) $localstorage.setObject('fingerPrints', fingerPrints ); $localstorage.setObject('users', userData ); $localstorage.setObject('lastSync', new Date()); //Call splashScreen to write the fingerprints with writeTemplate }) ``` Also, if the synchronization fails, record it in the log.
1.0
Get Users - Create a syncUsers method in a sync.js service with a dependency on api.js. It should first get the fingerprints from localstorage, or create an empty array if none exist. Then store the fingerprints on their own, so the reader can query them faster, and the user data separately. ``` js //$api.getUsers().then(function(apiUsers){ var users = apiUsers; var fingerPrints = $localstorage.getObject('fingerPrints') || [], userData = $localstorage.getObject('users') || [] angular.forEach(users, function(value) { fingerPrints[value.IdHuella] = value.Huella; delete value.Huella; userData[value.IdHuella] = value; }) $localstorage.setObject('fingerPrints', fingerPrints ); $localstorage.setObject('users', userData ); $localstorage.setObject('lastSync', new Date()); //Call splashScreen to write the fingerprints with writeTemplate }) ``` Also, if the synchronization fails, record it in the log.
non_main
get users create a syncusers method in a sync js service with a dependency on api js it should first get the fingerprints from localstorage or create an empty array if none exist then store the fingerprints on their own so the reader can query them faster and the user data separately js api getusers then function apiusers var users apiusers var fingerprints localstorage getobject fingerprints userdata localstorage getobject users angular foreach users function value fingerprints value huella delete value huella userdata value localstorage setobject fingerprints fingerprints localstorage setobject users userdata localstorage setobject lastsync new date call splashscreen to write the fingerprints with writetemplate also if the synchronization fails record it in the log
0
78,020
15,569,910,964
IssuesEvent
2021-03-17 01:16:45
Killy85/python-bot
https://api.github.com/repos/Killy85/python-bot
opened
CVE-2020-35655 (Medium) detected in Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl, Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl
security vulnerability
## CVE-2020-35655 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl</b>, <b>Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p> <details><summary><b>Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>Python Imaging Library (Fork)</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/0d/f3/421598450cb9503f4565d936860763b5af413a61009d87a5ab1e34139672/Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/0d/f3/421598450cb9503f4565d936860763b5af413a61009d87a5ab1e34139672/Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to vulnerable library: python-bot/requirements.txt</p> <p> Dependency Hierarchy: - :x: **Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </details> <details><summary><b>Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>Python Imaging Library (Fork)</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/b6/4b/5adc1109908266554fb978154c797c7d71aba43dd15508d8c1565648f6bc/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/b6/4b/5adc1109908266554fb978154c797c7d71aba43dd15508d8c1565648f6bc/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to vulnerable library: python-bot/requirements.txt</p> <p> Dependency Hierarchy: - :x: **Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Pillow before 8.1.0, SGIRleDecode has a 4-byte buffer over-read when decoding crafted SGI RLE image files because offsets and length tables are mishandled. 
<p>Publish Date: 2021-01-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35655>CVE-2020-35655</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655</a></p> <p>Release Date: 2021-01-12</p> <p>Fix Resolution: 8.1.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-35655 (Medium) detected in Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl, Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl - ## CVE-2020-35655 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl</b>, <b>Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p> <details><summary><b>Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>Python Imaging Library (Fork)</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/0d/f3/421598450cb9503f4565d936860763b5af413a61009d87a5ab1e34139672/Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/0d/f3/421598450cb9503f4565d936860763b5af413a61009d87a5ab1e34139672/Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to vulnerable library: python-bot/requirements.txt</p> <p> Dependency Hierarchy: - :x: **Pillow-5.4.1-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </details> <details><summary><b>Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary> <p>Python Imaging Library (Fork)</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/b6/4b/5adc1109908266554fb978154c797c7d71aba43dd15508d8c1565648f6bc/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/b6/4b/5adc1109908266554fb978154c797c7d71aba43dd15508d8c1565648f6bc/Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p> <p>Path to vulnerable library: python-bot/requirements.txt</p> <p> Dependency Hierarchy: - :x: **Pillow-6.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Pillow before 8.1.0, SGIRleDecode has a 4-byte buffer 
over-read when decoding crafted SGI RLE image files because offsets and length tables are mishandled. <p>Publish Date: 2021-01-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-35655>CVE-2020-35655</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-35655</a></p> <p>Release Date: 2021-01-12</p> <p>Fix Resolution: 8.1.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve medium detected in pillow whl pillow whl cve medium severity vulnerability vulnerable libraries pillow whl pillow whl pillow whl python imaging library fork library home page a href path to vulnerable library python bot requirements txt dependency hierarchy x pillow whl vulnerable library pillow whl python imaging library fork library home page a href path to vulnerable library python bot requirements txt dependency hierarchy x pillow whl vulnerable library vulnerability details in pillow before sgirledecode has a byte buffer over read when decoding crafted sgi rle image files because offsets and length tables are mishandled publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact low integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
84,321
7,916,839,578
IssuesEvent
2018-07-04 07:54:37
edenlabllc/ehealth.api
https://api.github.com/repos/edenlabllc/ehealth.api
closed
actualize scope model including contracts functionality
epic/contracts kind/task status/test
Please add a scope for working with contracts to the section - https://edenlab.atlassian.net/wiki/spaces/EH/pages/2004415/Scopes+model
1.0
actualize scope model including contracts functionality - Please add a scope for working with contracts to the section - https://edenlab.atlassian.net/wiki/spaces/EH/pages/2004415/Scopes+model
non_main
actualize scope model including contracts functionality please add a scope for working with contracts to the section
0
19,789
10,422,205,241
IssuesEvent
2019-09-16 08:28:33
Holo-Host/holo-nixpkgs
https://api.github.com/repos/Holo-Host/holo-nixpkgs
opened
profiles/hydra: set up sudo_pair
project: Hydra topic: security
Hydra servers are the most sensitive part of our setup. We should use something like [sudo_pair](https://github.com/square/sudo_pair) to authenticate privileged access.
True
profiles/hydra: set up sudo_pair - Hydra servers are the most sensitive part of our setup. We should use something like [sudo_pair](https://github.com/square/sudo_pair) to authenticate privileged access.
non_main
profiles hydra set up sudo pair hydra servers are the most sensitive part of our setup we should use something like to authenticate privileged access
0
106,318
9,126,155,947
IssuesEvent
2019-02-24 19:24:47
NCIOCPL/cgov-digital-platform
https://api.github.com/repos/NCIOCPL/cgov-digital-platform
opened
Test Case: Verify Posted Date
Test Case
# Pre-requisites * Create YAML Content for all content type pages below * Import YAML Content to test environment # Content Types Covered * Article - * Press Release - * Blog - * Cancer Type - * Bio - # Test scenarios: 1. 2. # Depends on Epic *
1.0
Test Case: Verify Posted Date - # Pre-requisites * Create YAML Content for all content type pages below * Import YAML Content to test environment # Content Types Covered * Article - * Press Release - * Blog - * Cancer Type - * Bio - # Test scenarios: 1. 2. # Depends on Epic *
non_main
test case verify posted date pre requisites create yaml content for all content type pages below import yaml content to test environment content types covered article press release blog cancer type bio test scenarios depends on epic
0
40,379
20,814,323,653
IssuesEvent
2022-03-18 08:31:50
hzi-braunschweig/SORMAS-Project
https://api.github.com/repos/hzi-braunschweig/SORMAS-Project
closed
Investigate performance of reporting tool filter
cases important feedback performance task
<!-- If you've never submitted an issue to the SORMAS repository before or this is your first time using this template, please read the Contributing guidelines (https://github.com/hzi-braunschweig/SORMAS-Project/blob/development/docs/CONTRIBUTING.md) for an explanation of the information we need you to provide. You don't have to remove this comment or any other comment from this issue as they will automatically be hidden. --> ### Problem Description <!-- Mandatory --> Users report that filtering for "Only cases changed since last shared with reporting tool" takes a very long time. ### Proposed Solution <!-- Mandatory --> - [ ] Investigate performance implications for this filter when handling a large number of cases that fulfill the filter condition ### Possible Alternatives <!-- Optional --> ### Additional Information <!-- Optional --> The filters are only active when the SurvNet Converter is configured on an instance.
True
Investigate performance of reporting tool filter - <!-- If you've never submitted an issue to the SORMAS repository before or this is your first time using this template, please read the Contributing guidelines (https://github.com/hzi-braunschweig/SORMAS-Project/blob/development/docs/CONTRIBUTING.md) for an explanation of the information we need you to provide. You don't have to remove this comment or any other comment from this issue as they will automatically be hidden. --> ### Problem Description <!-- Mandatory --> Users report that filtering for "Only cases changed since last shared with reporting tool" takes a very long time. ### Proposed Solution <!-- Mandatory --> - [ ] Investigate performance implications for this filter when handling a large number of cases that fulfill the filter condition ### Possible Alternatives <!-- Optional --> ### Additional Information <!-- Optional --> The filters are only active when the SurvNet Converter is configured on an instance.
non_main
investigate performance of reporting tool filter if you ve never submitted an issue to the sormas repository before or this is your first time using this template please read the contributing guidelines for an explanation of the information we need you to provide you don t have to remove this comment or any other comment from this issue as they will automatically be hidden problem description users report that filtering for only cases changed since last shared with reporting tool takes a very long time proposed solution investigate performance implications for this filter when handling a large number of cases that fulfill the filter condition possible alternatives additional information the filters are only active when the survnet converter is configured on an instance
0
3,596
4,427,946,568
IssuesEvent
2016-08-16 23:29:11
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
Encapsulate dev workflow common logic
enhancement Infrastructure
Add a script that encapsulates the common logic used in the dev workflow (i.e. OSGroup selection, common logging, init-tools, etc) and have all other dev workflow scripts (sync, clean, build-test, etc) simply call it with extra arguments unique to each dev workflow script. As mentioned by @weshaggard in PR #7566
1.0
Encapsulate dev workflow common logic - Add a script that encapsulates the common logic used in the dev workflow (i.e. OSGroup selection, common logging, init-tools, etc) and have all other dev workflow scripts (sync, clean, build-test, etc) simply call it with extra arguments unique to each dev workflow script. As mentioned by @weshaggard in PR #7566
non_main
encapsulate dev workflow common logic add a script that encapsulates the common logic used in the dev workflow i e osgroup selection common logging init tools etc and have all other dev workflow scripts sync clean build test etc simply call it with extra arguments unique to each dev workflow script as mentioned by weshaggard in pr
0
53,702
13,879,975,283
IssuesEvent
2020-10-17 16:43:51
Theatreers/Theatreers
https://api.github.com/repos/Theatreers/Theatreers
opened
CVE-2020-7662 (High) detected in websocket-extensions-0.1.3.tgz
security vulnerability
## CVE-2020-7662 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>websocket-extensions-0.1.3.tgz</b></p></summary> <p>Generic extension manager for WebSocket connections</p> <p>Library home page: <a href="https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.3.tgz">https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.3.tgz</a></p> <p>Path to dependency file: Theatreers/src/Theatreers.Frontend/package.json</p> <p>Path to vulnerable library: Theatreers/src/Theatreers.Frontend.Old/node_modules/websocket-extensions/package.json,Theatreers/src/Theatreers.Frontend.Old/node_modules/websocket-extensions/package.json</p> <p> Dependency Hierarchy: - cli-service-4.0.5.tgz (Root Library) - webpack-dev-server-3.9.0.tgz - sockjs-0.3.19.tgz - faye-websocket-0.10.0.tgz - websocket-driver-0.7.3.tgz - :x: **websocket-extensions-0.1.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Theatreers/Theatreers/commit/5b84ea045b36c4ad6f9fda41cea95252584b7e55">5b84ea045b36c4ad6f9fda41cea95252584b7e55</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> websocket-extensions npm module prior to 1.0.4 allows Denial of Service (DoS) via Regex Backtracking. The extension parser may take quadratic time when parsing a header containing an unclosed string parameter value whose content is a repeating two-byte sequence of a backslash and some other character. This could be abused by an attacker to conduct Regex Denial Of Service (ReDoS) on a single-threaded server by providing a malicious payload with the Sec-WebSocket-Extensions header. 
<p>Publish Date: 2020-06-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7662>CVE-2020-7662</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7662">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7662</a></p> <p>Release Date: 2020-06-02</p> <p>Fix Resolution: websocket-extensions:0.1.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7662 (High) detected in websocket-extensions-0.1.3.tgz - ## CVE-2020-7662 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>websocket-extensions-0.1.3.tgz</b></p></summary> <p>Generic extension manager for WebSocket connections</p> <p>Library home page: <a href="https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.3.tgz">https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.3.tgz</a></p> <p>Path to dependency file: Theatreers/src/Theatreers.Frontend/package.json</p> <p>Path to vulnerable library: Theatreers/src/Theatreers.Frontend.Old/node_modules/websocket-extensions/package.json,Theatreers/src/Theatreers.Frontend.Old/node_modules/websocket-extensions/package.json</p> <p> Dependency Hierarchy: - cli-service-4.0.5.tgz (Root Library) - webpack-dev-server-3.9.0.tgz - sockjs-0.3.19.tgz - faye-websocket-0.10.0.tgz - websocket-driver-0.7.3.tgz - :x: **websocket-extensions-0.1.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Theatreers/Theatreers/commit/5b84ea045b36c4ad6f9fda41cea95252584b7e55">5b84ea045b36c4ad6f9fda41cea95252584b7e55</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> websocket-extensions npm module prior to 1.0.4 allows Denial of Service (DoS) via Regex Backtracking. The extension parser may take quadratic time when parsing a header containing an unclosed string parameter value whose content is a repeating two-byte sequence of a backslash and some other character. This could be abused by an attacker to conduct Regex Denial Of Service (ReDoS) on a single-threaded server by providing a malicious payload with the Sec-WebSocket-Extensions header. 
<p>Publish Date: 2020-06-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7662>CVE-2020-7662</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7662">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7662</a></p> <p>Release Date: 2020-06-02</p> <p>Fix Resolution: websocket-extensions:0.1.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_main
cve high detected in websocket extensions tgz cve high severity vulnerability vulnerable library websocket extensions tgz generic extension manager for websocket connections library home page a href path to dependency file theatreers src theatreers frontend package json path to vulnerable library theatreers src theatreers frontend old node modules websocket extensions package json theatreers src theatreers frontend old node modules websocket extensions package json dependency hierarchy cli service tgz root library webpack dev server tgz sockjs tgz faye websocket tgz websocket driver tgz x websocket extensions tgz vulnerable library found in head commit a href found in base branch master vulnerability details websocket extensions npm module prior to allows denial of service dos via regex backtracking the extension parser may take quadratic time when parsing a header containing an unclosed string parameter value whose content is a repeating two byte sequence of a backslash and some other character this could be abused by an attacker to conduct regex denial of service redos on a single threaded server by providing a malicious payload with the sec websocket extensions header publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution websocket extensions step up your open source security game with whitesource
0
5,385
27,068,920,714
IssuesEvent
2023-02-14 04:17:16
diofant/diofant
https://api.github.com/repos/diofant/diofant
closed
Throw away some modules
maintainability
Probably, some less important stuff (diffgeom, geometry, vector, stats, plotting) should be maintained separately, as independent packages, which will require diofant.
True
Throw away some modules - Probably, some less important stuff (diffgeom, geometry, vector, stats, plotting) should be maintained separately, as independent packages, which will require diofant.
main
throw away some modules probably some less important stuff diffgeom geometry vector stats plotting should be maintained separately as independent packages which will require diofant
1
250,550
21,315,101,042
IssuesEvent
2022-04-16 06:10:13
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
roachtest: import/tpch/nodes=8 failed
C-test-failure O-robot O-roachtest branch-master release-blocker
roachtest.import/tpch/nodes=8 [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4907635&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4907635&tab=artifacts#/import/tpch/nodes=8) on master @ [771432d1099e516dbc11827c5458886c176e73e3](https://github.com/cockroachdb/cockroach/commits/771432d1099e516dbc11827c5458886c176e73e3): ``` The test failed on branch=master, cloud=gce: test artifacts and logs in: /artifacts/import/tpch/nodes=8/run_1 monitor.go:127,import.go:312,test_runner.go:875: monitor failure: monitor task failed: read tcp 172.17.0.3:54490 -> 34.139.206.51:26257: read: connection reset by peer (1) attached stack trace -- stack trace: | main.(*monitorImpl).WaitE | main/pkg/cmd/roachtest/monitor.go:115 | main.(*monitorImpl).Wait | main/pkg/cmd/roachtest/monitor.go:123 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerImportTPCH.func1 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/import.go:312 | [...repeated from below...] 
Wraps: (2) monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitorImpl).wait.func2 | main/pkg/cmd/roachtest/monitor.go:171 | runtime.goexit | GOROOT/src/runtime/asm_amd64.s:1581 Wraps: (4) monitor task failed Wraps: (5) read tcp 172.17.0.3:54490 -> 34.139.206.51:26257 Wraps: (6) read Wraps: (7) connection reset by peer Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *net.OpError (6) *os.SyscallError (7) syscall.Errno ``` <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/bulk-io <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*import/tpch/nodes=8.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
2.0
roachtest: import/tpch/nodes=8 failed - roachtest.import/tpch/nodes=8 [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4907635&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4907635&tab=artifacts#/import/tpch/nodes=8) on master @ [771432d1099e516dbc11827c5458886c176e73e3](https://github.com/cockroachdb/cockroach/commits/771432d1099e516dbc11827c5458886c176e73e3): ``` The test failed on branch=master, cloud=gce: test artifacts and logs in: /artifacts/import/tpch/nodes=8/run_1 monitor.go:127,import.go:312,test_runner.go:875: monitor failure: monitor task failed: read tcp 172.17.0.3:54490 -> 34.139.206.51:26257: read: connection reset by peer (1) attached stack trace -- stack trace: | main.(*monitorImpl).WaitE | main/pkg/cmd/roachtest/monitor.go:115 | main.(*monitorImpl).Wait | main/pkg/cmd/roachtest/monitor.go:123 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerImportTPCH.func1 | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/import.go:312 | [...repeated from below...] 
Wraps: (2) monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitorImpl).wait.func2 | main/pkg/cmd/roachtest/monitor.go:171 | runtime.goexit | GOROOT/src/runtime/asm_amd64.s:1581 Wraps: (4) monitor task failed Wraps: (5) read tcp 172.17.0.3:54490 -> 34.139.206.51:26257 Wraps: (6) read Wraps: (7) connection reset by peer Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *net.OpError (6) *os.SyscallError (7) syscall.Errno ``` <details><summary>Help</summary> <p> See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md) See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7) </p> </details> /cc @cockroachdb/bulk-io <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*import/tpch/nodes=8.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
non_main
roachtest import tpch nodes failed roachtest import tpch nodes with on master the test failed on branch master cloud gce test artifacts and logs in artifacts import tpch nodes run monitor go import go test runner go monitor failure monitor task failed read tcp read connection reset by peer attached stack trace stack trace main monitorimpl waite main pkg cmd roachtest monitor go main monitorimpl wait main pkg cmd roachtest monitor go github com cockroachdb cockroach pkg cmd roachtest tests registerimporttpch github com cockroachdb cockroach pkg cmd roachtest tests import go wraps monitor failure wraps attached stack trace stack trace main monitorimpl wait main pkg cmd roachtest monitor go runtime goexit goroot src runtime asm s wraps monitor task failed wraps read tcp wraps read wraps connection reset by peer error types withstack withstack errutil withprefix withstack withstack errutil withprefix net operror os syscallerror syscall errno help see see cc cockroachdb bulk io
0
2,352
8,406,165,520
IssuesEvent
2018-10-11 17:08:59
tgstation/tgstation-server
https://api.github.com/repos/tgstation/tgstation-server
opened
Abstract away calls to RuntimeInformation.IsOSPlatform
Backlog Maintainability Issue
Put it in a `ISystemInformation` interface so we can properly mock it
True
Abstract away calls to RuntimeInformation.IsOSPlatform - Put it in a `ISystemInformation` interface so we can properly mock it
main
abstract away calls to runtimeinformation isosplatform put it in a isysteminformation interface so we can properly mock it
1
536,877
15,716,090,282
IssuesEvent
2021-03-28 05:16:51
quacs/quacs
https://api.github.com/repos/quacs/quacs
closed
change schedule.ics to {course_set_name}.ics
High Priority good first issue
change schedule.ics to {course_set_name}.ics Should be really easy to do, probably one line of code
1.0
change schedule.ics to {course_set_name}.ics - change schedule.ics to {course_set_name}.ics Should be really easy to do, probably one line of code
non_main
change schedule ics to course set name ics change schedule ics to course set name ics should be really easy to do probably one line of code
0
50,355
21,076,589,045
IssuesEvent
2022-04-02 08:21:59
emergenzeHack/ukrainehelp.emergenzehack.info_segnalazioni
https://api.github.com/repos/emergenzeHack/ukrainehelp.emergenzehack.info_segnalazioni
opened
https://www.raiplay.it/benvenuti-bambini Cartoni animati in lingua italiana e ucraina (contenuti gr
Services translation Children
<pre><yamldata> servicetypes: materialGoods: false hospitality: false transport: false healthcare: false Legal: false translation: true job: false psychologicalSupport: false Children: true disability: false women: false education: false offerFromWho: Raiplay title: https://www.raiplay.it/benvenuti-bambini Cartoni animati in lingua italiana e ucraina (contenuti gratuiti). recipients: '' description: '' url: https://www.raiplay.it/benvenuti-bambini address: mode: autocomplete address: place_id: 283767136 licence: Data © OpenStreetMap contributors, ODbL 1.0. https://osm.org/copyright osm_type: relation osm_id: 41485 boundingbox: - '41.6556417' - '42.1410285' - '12.2344669' - '12.8557603' lat: '41.8933203' lon: '12.4829321' display_name: Roma, Roma Capitale, Lazio, Italia class: boundary type: administrative importance: 0.7896107180689524 icon: https://nominatim.openstreetmap.org/ui/mapicons//poi_boundary_administrative.p.20.png address: city: Roma county: Roma Capitale state: Lazio country: Italia country_code: it iConfirmToHaveReadAndAcceptedInformativeToThreatPersonalData: true label: services submit: true </yamldata></pre>
1.0
https://www.raiplay.it/benvenuti-bambini Cartoni animati in lingua italiana e ucraina (contenuti gr - <pre><yamldata> servicetypes: materialGoods: false hospitality: false transport: false healthcare: false Legal: false translation: true job: false psychologicalSupport: false Children: true disability: false women: false education: false offerFromWho: Raiplay title: https://www.raiplay.it/benvenuti-bambini Cartoni animati in lingua italiana e ucraina (contenuti gratuiti). recipients: '' description: '' url: https://www.raiplay.it/benvenuti-bambini address: mode: autocomplete address: place_id: 283767136 licence: Data © OpenStreetMap contributors, ODbL 1.0. https://osm.org/copyright osm_type: relation osm_id: 41485 boundingbox: - '41.6556417' - '42.1410285' - '12.2344669' - '12.8557603' lat: '41.8933203' lon: '12.4829321' display_name: Roma, Roma Capitale, Lazio, Italia class: boundary type: administrative importance: 0.7896107180689524 icon: https://nominatim.openstreetmap.org/ui/mapicons//poi_boundary_administrative.p.20.png address: city: Roma county: Roma Capitale state: Lazio country: Italia country_code: it iConfirmToHaveReadAndAcceptedInformativeToThreatPersonalData: true label: services submit: true </yamldata></pre>
non_main
cartoni animati in lingua italiana e ucraina contenuti gr servicetypes materialgoods false hospitality false transport false healthcare false legal false translation true job false psychologicalsupport false children true disability false women false education false offerfromwho raiplay title cartoni animati in lingua italiana e ucraina contenuti gratuiti recipients description url address mode autocomplete address place id licence data © openstreetmap contributors odbl osm type relation osm id boundingbox lat lon display name roma roma capitale lazio italia class boundary type administrative importance icon address city roma county roma capitale state lazio country italia country code it iconfirmtohavereadandacceptedinformativetothreatpersonaldata true label services submit true
0
1,905
6,577,561,632
IssuesEvent
2017-09-12 01:46:32
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
S3 module doesn't use AWS Signature Version 4
affects_2.0 aws bug_report cloud waiting_on_maintainer
##### Issue Type: - Bug Report ##### Ansible Version: ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### Ansible Configuration: NONE ##### Environment: - CentOS 7 - Ubuntu 14.04 ##### Summary: Using Ansible's S3 module I am not able to copy server-side encrypted files (Server Side Encryption with AWS KMS managed keys) from bucket to local directory although all needed settings are set and AWSCLI works well. EC2 instance has IAM role with permissions to use the appropriate KMS key and access the bucket. I don't set variables access_key and access_secret_key explicitly. I am able to get non-encrypted files from the same bucket using Ansible's S3 module. ##### Steps To Reproduce: CentOS7: - yum install epel-release - yum install python-pip - yum install --enablerepo epel-testing ansible - pip install --upgrade awscli - aws configure set s3.signature_version s3v4 - ansible -vvv localhost -c local -m s3 -a 'region=eu-west-1 mode=get bucket=my-bucket object=/id_rsa dest=/root/.ssh/id_rsa' But this works as expected: - aws s3 cp 's3://my-bucket/id_rsa' /root/.ssh/id_rsa --region eu-west-1 ##### Expected Results: File id_rsa occurs in the /root/.ssh/ directory. ##### Actual Results: Using /etc/ansible/ansible.cfg as config file ESTABLISH LOCAL CONNECTION FOR USER: root 127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705 )" ) 127.0.0.1 PUT /tmp/tmppZN1Re TO /root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3 127.0.0.1 EXEC LANG=en_US.utf8 LC_ALL=en_US.utf8 LC_MESSAGES=en_US.utf8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3; rm -rf "/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/" > /dev/null 2>&1 An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File "/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3", line 2823, in <module> main() File "/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3", line 496, in main download_s3file(module, s3, bucket, obj, dest, retries, version=version) File "/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3", line 323, in download_s3file key.get_contents_to_filename(dest) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 1665, in get_contents_to_filename response_headers=response_headers) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 1603, in get_contents_to_file response_headers=response_headers) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 1435, in get_file query_args=None) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 1467, in _get_file_internal override_num_retries=override_num_retries) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 325, in open override_num_retries=override_num_retries) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 273, in open_read self.resp.reason, body) boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request <?xml version="1.0" encoding="UTF-8"?> <Error><Code>InvalidArgument</Code><Message>Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.</Message><ArgumentName>Authorization</ArgumentName><ArgumentValue>null</ArgumentValue><RequestId>...</RequestId><HostId>...</HostId></Error> localhost | FAILED! => { "changed": false, "failed": true, "invocation": { "module_name": "s3" }, "parsed": false }
True
S3 module doesn't use AWS Signature Version 4 - ##### Issue Type: - Bug Report ##### Ansible Version: ansible 2.0.0.2 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ##### Ansible Configuration: NONE ##### Environment: - CentOS 7 - Ubuntu 14.04 ##### Summary: Using Ansible's S3 module I am not able to copy server-side encrypted files (Server Side Encryption with AWS KMS managed keys) from bucket to local directory although all needed settings are set and AWSCLI works well. EC2 instance has IAM role with permissions to use the appropriate KMS key and access the bucket. I don't set variables access_key and access_secret_key explicitly. I am able to get non-encrypted files from the same bucket using Ansible's S3 module. ##### Steps To Reproduce: CentOS7: - yum install epel-release - yum install python-pip - yum install --enablerepo epel-testing ansible - pip install --upgrade awscli - aws configure set s3.signature_version s3v4 - ansible -vvv localhost -c local -m s3 -a 'region=eu-west-1 mode=get bucket=my-bucket object=/id_rsa dest=/root/.ssh/id_rsa' But this works as expected: - aws s3 cp 's3://my-bucket/id_rsa' /root/.ssh/id_rsa --region eu-west-1 ##### Expected Results: File id_rsa occurs in the /root/.ssh/ directory. ##### Actual Results: Using /etc/ansible/ansible.cfg as config file ESTABLISH LOCAL CONNECTION FOR USER: root 127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705 )" ) 127.0.0.1 PUT /tmp/tmppZN1Re TO /root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3 127.0.0.1 EXEC LANG=en_US.utf8 LC_ALL=en_US.utf8 LC_MESSAGES=en_US.utf8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3; rm -rf "/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/" > /dev/null 2>&1 An exception occurred during task execution. 
The full traceback is: Traceback (most recent call last): File "/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3", line 2823, in <module> main() File "/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3", line 496, in main download_s3file(module, s3, bucket, obj, dest, retries, version=version) File "/root/.ansible/tmp/ansible-tmp-1456127493.26-47301518663705/s3", line 323, in download_s3file key.get_contents_to_filename(dest) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 1665, in get_contents_to_filename response_headers=response_headers) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 1603, in get_contents_to_file response_headers=response_headers) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 1435, in get_file query_args=None) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 1467, in _get_file_internal override_num_retries=override_num_retries) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 325, in open override_num_retries=override_num_retries) File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 273, in open_read self.resp.reason, body) boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request <?xml version="1.0" encoding="UTF-8"?> <Error><Code>InvalidArgument</Code><Message>Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.</Message><ArgumentName>Authorization</ArgumentName><ArgumentValue>null</ArgumentValue><RequestId>...</RequestId><HostId>...</HostId></Error> localhost | FAILED! => { "changed": false, "failed": true, "invocation": { "module_name": "s3" }, "parsed": false }
main
module doesn t use aws signature version issue type bug report ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides ansible configuration none environment centos ubuntu summary using ansible s module i am not able to copy server side encrypted files server side encryption with aws kms managed keys from bucket to local directory although all needed settings are set and awscli works well instance has iam role with permissions to use the appropriate kms key and access the bucket i don t set variables access key and access secret key explicitly i am able to get non encrypted files from the same bucket using ansible s module steps to reproduce yum install epel release yum install python pip yum install enablerepo epel testing ansible pip install upgrade awscli aws configure set signature version ansible vvv localhost c local m a region eu west mode get bucket my bucket object id rsa dest root ssh id rsa but this works as expected aws cp my bucket id rsa root ssh id rsa region eu west expected results file id rsa occurs in the root ssh directory actual results using etc ansible ansible cfg as config file establish local connection for user root exec umask mkdir p echo home ansible tmp ansible tmp echo echo home ansible tmp ansible tmp put tmp to root ansible tmp ansible tmp exec lang en us lc all en us lc messages en us usr bin python root ansible tmp ansible tmp rm rf root ansible tmp ansible tmp dev null an exception occurred during task execution the full traceback is traceback most recent call last file root ansible tmp ansible tmp line in main file root ansible tmp ansible tmp line in main download module bucket obj dest retries version version file root ansible tmp ansible tmp line in download key get contents to filename dest file usr lib site packages boto key py line in get contents to filename response headers response headers file usr lib site packages boto key py line in get contents to file response 
headers response headers file usr lib site packages boto key py line in get file query args none file usr lib site packages boto key py line in get file internal override num retries override num retries file usr lib site packages boto key py line in open override num retries override num retries file usr lib site packages boto key py line in open read self resp reason body boto exception bad request invalidargument requests specifying server side encryption with aws kms managed keys require aws signature version authorization null localhost failed changed false failed true invocation module name parsed false
1
4,312
21,708,694,237
IssuesEvent
2022-05-10 12:05:20
svengreb/golib
https://api.github.com/repos/svengreb/golib
closed
Update to `tmpl-go` template repository version `0.12.0`
type-improvement context-workflow scope-compatibility scope-maintainability
Update to [`tmpl-go` version `0.12.0`][1], including the versions in between starting from [0.7.0][3]: 1. [Updated to Go 1.17][4]. 2. [Updated to golangci-lint `v1.43.0`][5]. 3. [Updated to `tmpl` template repository version `0.11.0`][6]. 4. [Optimized GitHub action workflows for Go and Node][7]. 5. [Disabled golangci-lint's default excluded issues][8]. 6. [Introduced Go dependency caching and build outputs in `ci-go` workflow][9]. 7. [Disabled revive linter rule `package-comments`][10]. 8. [Fixed golangci-lint fail to run due to `revives` unknown `time-equal` rule][11]. 9. [Updated Node.js packages & GitHub actions][12]. This will also include changes required for any linter matches. [1]: https://github.com/svengreb/tmpl-go/releases/tag/v0.12.0 [3]: https://github.com/svengreb/tmpl-go/releases/tag/v0.7.0 [4]: https://github.com/svengreb/tmpl-go/issues/66 [5]: https://github.com/svengreb/tmpl-go/issues/64 [6]: https://github.com/svengreb/tmpl-go/issues/91 [7]: https://github.com/svengreb/tmpl-go/issues/68 [8]: https://github.com/svengreb/tmpl-go/issues/72 [9]: https://github.com/svengreb/tmpl-go/issues/74 [10]: https://github.com/svengreb/tmpl-go/issues/78 [11]: https://github.com/svengreb/tmpl-go/issues/76 [12]: https://github.com/svengreb/tmpl-go/issues/42
True
Update to `tmpl-go` template repository version `0.12.0` - Update to [`tmpl-go` version `0.12.0`][1], including the versions in between starting from [0.7.0][3]: 1. [Updated to Go 1.17][4]. 2. [Updated to golangci-lint `v1.43.0`][5]. 3. [Updated to `tmpl` template repository version `0.11.0`][6]. 4. [Optimized GitHub action workflows for Go and Node][7]. 5. [Disabled golangci-lint's default excluded issues][8]. 6. [Introduced Go dependency caching and build outputs in `ci-go` workflow][9]. 7. [Disabled revive linter rule `package-comments`][10]. 8. [Fixed golangci-lint fail to run due to `revives` unknown `time-equal` rule][11]. 9. [Updated Node.js packages & GitHub actions][12]. This will also include changes required for any linter matches. [1]: https://github.com/svengreb/tmpl-go/releases/tag/v0.12.0 [3]: https://github.com/svengreb/tmpl-go/releases/tag/v0.7.0 [4]: https://github.com/svengreb/tmpl-go/issues/66 [5]: https://github.com/svengreb/tmpl-go/issues/64 [6]: https://github.com/svengreb/tmpl-go/issues/91 [7]: https://github.com/svengreb/tmpl-go/issues/68 [8]: https://github.com/svengreb/tmpl-go/issues/72 [9]: https://github.com/svengreb/tmpl-go/issues/74 [10]: https://github.com/svengreb/tmpl-go/issues/78 [11]: https://github.com/svengreb/tmpl-go/issues/76 [12]: https://github.com/svengreb/tmpl-go/issues/42
main
update to tmpl go template repository version update to including the versions in between starting from this will also include changes required for any linter matches
1
459
3,640,520,370
IssuesEvent
2016-02-13 00:54:37
dotnet/roslyn-analyzers
https://api.github.com/repos/dotnet/roslyn-analyzers
closed
Port FxCop rule CA1801: ReviewUnusedParameters
Area-Microsoft.Maintainability.Analyzers FxCop-Port Urgency-Soon
**Title:** Review unused parameters **Description:** A method signature includes a parameter that is not used in the method body. **Dependency:** None, can be based on: https://github.com/dotnet/roslyn/blob/master/src/Samples/CSharp/Analyzers/CSharpAnalyzers/CSharpAnalyzers/StatefulAnalyzers/CodeBlockStartedAnalyzer.cs **Notes:** Don't fire if the parameter comes from an interface you're implementing or a virtual method you're overriding.
True
Port FxCop rule CA1801: ReviewUnusedParameters - **Title:** Review unused parameters **Description:** A method signature includes a parameter that is not used in the method body. **Dependency:** None, can be based on: https://github.com/dotnet/roslyn/blob/master/src/Samples/CSharp/Analyzers/CSharpAnalyzers/CSharpAnalyzers/StatefulAnalyzers/CodeBlockStartedAnalyzer.cs **Notes:** Don't fire if the parameter comes from an interface you're implementing or a virtual method you're overriding.
main
port fxcop rule reviewunusedparameters title review unused parameters description a method signature includes a parameter that is not used in the method body dependency none can be based on notes don t fire if the parameter comes from an interface you re implementing or a virtual method you re overriding
1
380
3,412,681,146
IssuesEvent
2015-12-06 02:42:01
spyder-ide/spyder
https://api.github.com/repos/spyder-ide/spyder
closed
[Help]plugin installation
Enhancement Maintainability
Hi! Please help me, How to install plugins(autopep8 - line_profiler- memeory_profiler) in spyder. I'm using arch linux and it's official spyder packages. Installation via pip has error in compatible version.
True
[Help]plugin installation - Hi! Please help me, How to install plugins(autopep8 - line_profiler- memeory_profiler) in spyder. I'm using arch linux and it's official spyder packages. Installation via pip has error in compatible version.
main
plugin installation hi please help me how to install plugins line profiler memeory profiler in spyder i m using arch linux and it s official spyder packages installation via pip has error in compatible version
1
4,199
20,601,583,928
IssuesEvent
2022-03-06 10:50:01
truecharts/apps
https://api.github.com/repos/truecharts/apps
reopened
Add Requestrr app
New App Request No-Maintainer
Please add requesterr docker app. https://github.com/darkalfx/requestrr This application allows users to request content from Sonarr & Radarr through Discord.
True
Add Requestrr app - Please add requesterr docker app. https://github.com/darkalfx/requestrr This application allows users to request content from Sonarr & Radarr through Discord.
main
add requestrr app please add requesterr docker app this application allows users to request content from sonarr radarr through discord
1
332,689
24,347,920,600
IssuesEvent
2022-10-02 15:14:54
ICEI-PUC-Minas-PMV-ADS/pmv-ads-2022-2-e1-proj-web-t7-planejamento-orcamentario
https://api.github.com/repos/ICEI-PUC-Minas-PMV-ADS/pmv-ads-2022-2-e1-proj-web-t7-planejamento-orcamentario
reopened
Contextualizar o projeto
documentation
documentação de contexto é um texto descritivo com a visão geral do projeto abordado, que inclui o contexto, o problema, os objetivos, a justificativa e o público-alvo do projeto.
1.0
Contextualizar o projeto - documentação de contexto é um texto descritivo com a visão geral do projeto abordado, que inclui o contexto, o problema, os objetivos, a justificativa e o público-alvo do projeto.
non_main
contextualizar o projeto documentação de contexto é um texto descritivo com a visão geral do projeto abordado que inclui o contexto o problema os objetivos a justificativa e o público alvo do projeto
0
1
2,490,913,530
IssuesEvent
2015-01-02 21:37:02
hamcrest/JavaHamcrest
https://api.github.com/repos/hamcrest/JavaHamcrest
closed
Decouple the evolution of Hamcrest and JUnit
maintainability
Evolution of Hamcrest and JUnit has been held back by the dependencies between the two projects. If they can be decoupled, both projects can move forward with less coordination required. Plan: * create a new project (hamcrest-junit, say) that contains a copy of the JUnit code that uses Hamcrest, repackaged somewhere under org.hamcrest (org.hamcrest.junit, say) so that they can live side-by-side with existing JUnit and Hamcrest code. * Deprecate existing code in JUnit that depends on Hamcrest * Perhaps deprecate Hamcrest's MatcherAssert class, now that it is duplicated in the hamcrest-junit module. * Eventually delete the deprecated code. * Run a CI "matrix" that reports the compatibility of Hamcrest and JUnit versions. The uses of Hamcrest in JUnit are: - Assert and Assume in org.junit, for which we'd have to duplicate the methods using matchers as (for instance) MatcherAssert and MatcherAssume - ErrorCollector and ExpectedException (and a package-protected supporting class) in org.junit.rules, which we'd have to duplicate. - several classes in the org.junit.internal packages, which client code should not be using
True
Decouple the evolution of Hamcrest and JUnit - Evolution of Hamcrest and JUnit has been held back by the dependencies between the two projects. If they can be decoupled, both projects can move forward with less coordination required. Plan: * create a new project (hamcrest-junit, say) that contains a copy of the JUnit code that uses Hamcrest, repackaged somewhere under org.hamcrest (org.hamcrest.junit, say) so that they can live side-by-side with existing JUnit and Hamcrest code. * Deprecate existing code in JUnit that depends on Hamcrest * Perhaps deprecate Hamcrest's MatcherAssert class, now that it is duplicated in the hamcrest-junit module. * Eventually delete the deprecated code. * Run a CI "matrix" that reports the compatibility of Hamcrest and JUnit versions. The uses of Hamcrest in JUnit are: - Assert and Assume in org.junit, for which we'd have to duplicate the methods using matchers as (for instance) MatcherAssert and MatcherAssume - ErrorCollector and ExpectedException (and a package-protected supporting class) in org.junit.rules, which we'd have to duplicate. - several classes in the org.junit.internal packages, which client code should not be using
main
decouple the evolution of hamcrest and junit evolution of hamcrest and junit has been held back by the dependencies between the two projects if they can be decoupled both projects can move forward with less coordination required plan create a new project hamcrest junit say that contains a copy of the junit code that uses hamcrest repackaged somewhere under org hamcrest org hamcrest junit say so that they can live side by side with existing junit and hamcrest code deprecate existing code in junit that depends on hamcrest perhaps deprecate hamcrest s matcherassert class now that it is duplicated in the hamcrest junit module eventually delete the deprecated code run a ci matrix that reports the compatibility of hamcrest and junit versions the uses of hamcrest in junit are assert and assume in org junit for which we d have to duplicate the methods using matchers as for instance matcherassert and matcherassume errorcollector and expectedexception and a package protected supporting class in org junit rules which we d have to duplicate several classes in the org junit internal packages which client code should not be using
1
72,281
3,377,859,400
IssuesEvent
2015-11-25 07:28:26
xcat2/xcat-core
https://api.github.com/repos/xcat2/xcat-core
closed
[FVT]:rflash return Firmware upgrade procedure failed on BMC firmware update on Openpower machine
priority:normal status:pending type:bug
xCAT 2.11 ``` root@:/firmware# lsxcatd -v Version 2.11 (git commit 51316df775cb2e410708aab98c3afe9c8ef6ea1d, built Thu Nov 19 03:16:29 EST 2015) ``` ``` root@:/firmware# time rflash c910f05c37 /firmware/8335_810.1543.20151021b_update.hpm c910f05c37: rflash started, please wait....... c910f05c37: Error: PICMG HPM.1 Upgrade Agent 1.0.9: Setting large buffer to 30000 Validating firmware image integrity...OK Performing preparation stage...OK Performing upgrade stage: ------------------------------------------------------------------------------- |ID | Name | Versions | % | | | | Active | Backup | File | | |----|-------------|-----------------|-----------------|-----------------|----| |* 2|BIOS | 0.00 00000000 | ---.-- -------- | 1.01 02010701 | 0%|S ------------------------------------------------------------------------------- (*) Component requires Payload Cold Reset Firmware upgrade procedure failed real 0m55.035s user 0m0.090s sys 0m0.008s ```
1.0
[FVT]:rflash return Firmware upgrade procedure failed on BMC firmware update on Openpower machine - xCAT 2.11 ``` root@:/firmware# lsxcatd -v Version 2.11 (git commit 51316df775cb2e410708aab98c3afe9c8ef6ea1d, built Thu Nov 19 03:16:29 EST 2015) ``` ``` root@:/firmware# time rflash c910f05c37 /firmware/8335_810.1543.20151021b_update.hpm c910f05c37: rflash started, please wait....... c910f05c37: Error: PICMG HPM.1 Upgrade Agent 1.0.9: Setting large buffer to 30000 Validating firmware image integrity...OK Performing preparation stage...OK Performing upgrade stage: ------------------------------------------------------------------------------- |ID | Name | Versions | % | | | | Active | Backup | File | | |----|-------------|-----------------|-----------------|-----------------|----| |* 2|BIOS | 0.00 00000000 | ---.-- -------- | 1.01 02010701 | 0%|S ------------------------------------------------------------------------------- (*) Component requires Payload Cold Reset Firmware upgrade procedure failed real 0m55.035s user 0m0.090s sys 0m0.008s ```
non_main
rflash return firmware upgrade procedure failed on bmc firmware update on openpower machine xcat root firmware lsxcatd v version git commit built thu nov est root firmware time rflash firmware update hpm rflash started please wait error picmg hpm upgrade agent setting large buffer to validating firmware image integrity ok performing preparation stage ok performing upgrade stage id name versions active backup file bios s component requires payload cold reset firmware upgrade procedure failed real user sys
0
4,346
21,931,378,728
IssuesEvent
2022-05-23 10:02:05
ipld/go-ipld-prime
https://api.github.com/repos/ipld/go-ipld-prime
closed
Representation of kinded Unions
need/maintainer-input
I am currently working with IPLD using go-ipld-prime and am looking to implement some custom ADLs. While experimenting with https://github.com/ipld/go-ipld-adl-hamt and with generating code based on my own custom schema it seems that the way Unions with representation set to kinded are serialized is inconsistent with the specs at https://ipld.io/docs/schemas/features/representation-strategies/#union-kinded-representation. That spec seems to indicate that the union should be represented as one of the possible types without any wrapping, but the code generated by this project results in the content of the union being wrapped in a map with the key of the type name and a value. Can someone explain this difference?
True
Representation of kinded Unions - I am currently working with IPLD using go-ipld-prime and am looking to implement some custom ADLs. While experimenting with https://github.com/ipld/go-ipld-adl-hamt and with generating code based on my own custom schema it seems that the way Unions with representation set to kinded are serialized is inconsistent with the specs at https://ipld.io/docs/schemas/features/representation-strategies/#union-kinded-representation. That spec seems to indicate that the union should be represented as one of the possible types without any wrapping, but the code generated by this project results in the content of the union being wrapped in a map with the key of the type name and a value. Can someone explain this difference?
main
representation of kinded unions i am currently working with ipld using go ipld prime and am looking to implement some custom adls while experimenting with and with generating code based on my own custom schema it seems that the way unions with representation set to kinded are serialized is inconsistent with the specs at that spec seems to indicate that the union should be represented as one of the possible types without any wrapping but the code generated by this project results in the content of the union being wrapped in a map with the key of the type name and a value can someone explain this difference
1
624,159
19,688,260,891
IssuesEvent
2022-01-12 01:59:53
AlecM33/Werewolf
https://api.github.com/repos/AlecM33/Werewolf
closed
Card quantities reset when a custom role is added
invalid priority
All card quantities are reset to 0 when a custom card is added, which can be frustrating for the user if they already constructed a deck before adding the role. Change this to preserve the quantity values from before the card was added.
1.0
Card quantities reset when a custom role is added - All card quantities are reset to 0 when a custom card is added, which can be frustrating for the user if they already constructed a deck before adding the role. Change this to preserve the quantity values from before the card was added.
non_main
card quantities reset when a custom role is added all card quantities are reset to when a custom card is added which can be frustrating for the user if they already constructed a deck before adding the role change this to preserve the quantity values from before the card was added
0
20,638
3,830,009,822
IssuesEvent
2016-03-31 13:10:16
Gapminder/dollar-street-pages
https://api.github.com/repos/Gapminder/dollar-street-pages
closed
All photographers page: make the search box search for countries
enhancement Iteration 3 Tested
Make the search box search for countries as well, not only photographer names
1.0
All photographers page: make the search box search for countries - Make the search box search for countries as well, not only photographer names
non_main
all photographers page make the search box search for countries make the search box search for countries as well not only photographer names
0
2,142
7,369,802,523
IssuesEvent
2018-03-13 05:07:53
openaddresses/submit-service
https://api.github.com/repos/openaddresses/submit-service
opened
Move env settings checks to individual endpoints
/download /maintainers /submit v1
Currently the process.env settings are checked for [here](https://github.com/openaddresses/submit-service/blob/master/index.js#L4-L12). Since the endpoints run as lambdas, it makes more sense for the routes that depend on the values to check for them as preconditions and return HTTP 500 if the values aren't there.
True
Move env settings checks to individual endpoints - Currently the process.env settings are checked for [here](https://github.com/openaddresses/submit-service/blob/master/index.js#L4-L12). Since the endpoints run as lambdas, it makes more sense for the routes that depend on the values to check for them as preconditions and return HTTP 500 if the values aren't there.
main
move env settings checks to individual endpoints currently the process env settings are checked for since the endpoints run as lambdas it makes more sense for the routes that depend on the values to check for them as preconditions and return http if the values aren t there
1
77,773
10,021,013,676
IssuesEvent
2019-07-16 13:50:25
cseeger-epages/mail2most
https://api.github.com/repos/cseeger-epages/mail2most
closed
config example for multiple profiles
documentation
**Is your feature request related to a problem? Please describe.** To understand the use of multiple profiles. **Describe the solution you'd like** Add a config example for multiple profiles and references in the documentation
1.0
config example for multiple profiles - **Is your feature request related to a problem? Please describe.** To understand the use of multiple profiles. **Describe the solution you'd like** Add a config example for multiple profiles and references in the documentation
non_main
config example for multiple profiles is your feature request related to a problem please describe to understand the use of multiple profiles describe the solution you d like add a config example for multiple profiles and references in the documentation
0
35,885
12,394,213,112
IssuesEvent
2020-05-20 16:34:36
dotnet/aspnetcore
https://api.github.com/repos/dotnet/aspnetcore
opened
Add debug logging to Certificate authentication when it short circuits due to scheme
area-security
Creating from part of the investigation in https://github.com/dotnet/aspnetcore/issues/21993#issuecomment-631584333 Certificate auth will [short circuit](https://github.com/dotnet/aspnetcore/blob/master/src/Security/Authentication/Certificate/src/CertificateAuthenticationHandler.cs#L47) if the request is considered to be HTTP, with no hint that it's doing so. A debug log message will make this easier to diagnose.
True
Add debug logging to Certificate authentication when it short circuits due to scheme - Creating from part of the investigation in https://github.com/dotnet/aspnetcore/issues/21993#issuecomment-631584333 Certificate auth will [short circuit](https://github.com/dotnet/aspnetcore/blob/master/src/Security/Authentication/Certificate/src/CertificateAuthenticationHandler.cs#L47) if the request is considered to be HTTP, with no hint that it's doing so. A debug log message will make this easier to diagnose.
non_main
add debug logging to certificate authentication when it short circuits due to scheme creating from part of the investigation in certificate auth will if the request is considered to be http with no hint that it s doing so a debug log message will make this easier to diagnose
0
286,247
8,785,692,539
IssuesEvent
2018-12-20 13:46:41
neoclide/coc.nvim
https://api.github.com/repos/neoclide/coc.nvim
closed
Completion causes cursor to vanish in Vim
help wanted low priority
**Result from CocInfo** Run `:CocInfo` command and paste the content below. ``` ## versions vim version: VIM - Vi IMproved 8.1 (2018 May 18, compiled Dec 8 2018 11:23:48) node version: v11.5.0 coc.nvim version: 0.0.40 term: dumb platform: linux ## Error messages ## Output channel: prettier ``` **Describe the bug** A clear and concise description of what the bug is. Completion in Vim is a bit flaky/flickered compared to NeoVim, and it causes the cursor vanish, while that never happens in NeoVim. **To Reproduce** Steps to reproduce the behavior: 1. Just type with completion in Vim. **Screenshots** If applicable, add screenshots to help explain your problem. ![simplescreenrecorder](https://user-images.githubusercontent.com/1269815/50256613-3fd41f00-03de-11e9-9713-ed6cfda83aad.gif)
1.0
Completion causes cursor to vanish in Vim - **Result from CocInfo** Run `:CocInfo` command and paste the content below. ``` ## versions vim version: VIM - Vi IMproved 8.1 (2018 May 18, compiled Dec 8 2018 11:23:48) node version: v11.5.0 coc.nvim version: 0.0.40 term: dumb platform: linux ## Error messages ## Output channel: prettier ``` **Describe the bug** A clear and concise description of what the bug is. Completion in Vim is a bit flaky/flickered compared to NeoVim, and it causes the cursor vanish, while that never happens in NeoVim. **To Reproduce** Steps to reproduce the behavior: 1. Just type with completion in Vim. **Screenshots** If applicable, add screenshots to help explain your problem. ![simplescreenrecorder](https://user-images.githubusercontent.com/1269815/50256613-3fd41f00-03de-11e9-9713-ed6cfda83aad.gif)
non_main
completion causes cursor to vanish in vim result from cocinfo run cocinfo command and paste the content below versions vim version vim vi improved may compiled dec node version coc nvim version term dumb platform linux error messages output channel prettier describe the bug a clear and concise description of what the bug is completion in vim is a bit flaky flickered compared to neovim and it causes the cursor vanish while that never happens in neovim to reproduce steps to reproduce the behavior just type with completion in vim screenshots if applicable add screenshots to help explain your problem
0
165,941
26,254,719,901
IssuesEvent
2023-01-05 23:01:35
sul-dlss/vt-arclight
https://api.github.com/repos/sul-dlss/vt-arclight
closed
text change in message shown upon searching text
design needed analysis needed content development
Suggestion by Tom in the demo from 12/15/2022: remove reference to "quality of the scans" and instead refer to OCR. Suggest this draft text: "Your query was matched to the scanned text of all text documents in the collection. Due to the potential for poor quality during the Optical Character Recognition (OCR) process, these search results may include incorrect matches." <img width="1102" alt="Screen Shot 2022-12-15 at 4 18 35 PM" src="https://user-images.githubusercontent.com/3269689/207993895-5860e729-9f45-4e59-9742-5e25f72bbe89.png">
1.0
text change in message shown upon searching text - Suggestion by Tom in the demo from 12/15/2022: remove reference to "quality of the scans" and instead refer to OCR. Suggest this draft text: "Your query was matched to the scanned text of all text documents in the collection. Due to the potential for poor quality during the Optical Character Recognition (OCR) process, these search results may include incorrect matches." <img width="1102" alt="Screen Shot 2022-12-15 at 4 18 35 PM" src="https://user-images.githubusercontent.com/3269689/207993895-5860e729-9f45-4e59-9742-5e25f72bbe89.png">
non_main
text change in message shown upon searching text suggestion by tom in the demo from remove reference to quality of the scans and instead refer to ocr suggest this draft text your query was matched to the scanned text of all text documents in the collection due to the potential for poor quality during the optical character recognition ocr process these search results may include incorrect matches img width alt screen shot at pm src
0
379,079
11,215,286,240
IssuesEvent
2020-01-07 01:34:14
TTT-2/TTT2
https://api.github.com/repos/TTT-2/TTT2
closed
Bug with fast reload
bug low priority stale
Есть критический баг, с быстрой перезарядкой. Смысл его в том, что ты можешь стрелять, выстрелить целую обойму, нажав перезарядку или она начнется сама и выкинуть оружие, подобрав его, оно уже будет перезаряжено, что позволит снова стрелять.
1.0
Bug with fast reload - Есть критический баг, с быстрой перезарядкой. Смысл его в том, что ты можешь стрелять, выстрелить целую обойму, нажав перезарядку или она начнется сама и выкинуть оружие, подобрав его, оно уже будет перезаряжено, что позволит снова стрелять.
non_main
bug with fast reload есть критический баг с быстрой перезарядкой смысл его в том что ты можешь стрелять выстрелить целую обойму нажав перезарядку или она начнется сама и выкинуть оружие подобрав его оно уже будет перезаряжено что позволит снова стрелять
0
781
4,386,361,139
IssuesEvent
2016-08-08 12:34:40
ansible/ansible-modules-extras
https://api.github.com/repos/ansible/ansible-modules-extras
closed
seport configuration before selinux reboot
bug_report waiting_on_maintainer
<!--- Verify first that your issue/request is not already reported in GitHub --> This issue has not previously been reported on GitHub ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> ansible-modules-extras/blob/devel/system/seport.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT Ansible Host: CentOS Linux release 7.2.1511 (Core) Ansible Target: Red Hat Enterprise Linux Server release 5.6 (Tikanga) ##### SUMMARY When a new system comes online, there's a long list of things to do, but a critical point is: 1. ensure that SElinux is turned on 2. use `seport` to open custom SSH port under SElinux 3. update SSH configuration for custom port 4. reboot server if necessary for SElinux configuration Problem is you can't execute this with ansible seport as currently written because seport will fail to add a port to the whitelist if you just executed `selinux: policy=targeted state=enforcing` because the system must be rebooted first... Yet if you haven't updated the whitelist, you'll never be able to reconnect after the reboot. ##### STEPS TO REPRODUCE If you run this on a system that doesn't yet have SElinux enabled, it will fail with error: SELinux is disabled on this host. 
``` - name: prep for SElinux dependancies yum: state=present name={{ item }} with_items: - libselinux-python - policycoreutils-python - name: enable SELinux selinux: policy=targeted state=enforcing - name: make sure we've got YOUR PORT for SSH open via SELinux seport: ports=YOUR PORT proto=tcp setype=ssh_port_t state=present - name: standard SSH configuration lineinfile: dest: /etc/ssh/sshd_config state: present create: yes regexp: "{{ item.regexp }}" line: "{{ item.line }}" # validate: '/usr/sbin/sshd -t %s' with_items: - { regexp: '^#?Port\s', line: 'Port YOURPORT' } - { regexp: '^#?PermitRootLogin\s', line: 'PermitRootLogin no' } - { regexp: '^#?X11Forwarding\s', line: 'X11Forwarding no' } notify: restart ssh ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS Personally I think the command should work just fine on a system that has SElinux "enabled" via the configuration options, even if the actual SElinux process isn't yet engaged (via reading the configuration at reboot). Note that even when I commented out the selinux check and tried to run it on the target host, I got: # if not selinux.is_selinux_enabled(): # module.fail_json(msg="SELinux is disabled on this host.") `sudo python ./tmp/ansible-tmp-1461064894.01-221154378874892/seport Traceback (most recent call last): File "./tmp/ansible-tmp-1461064894.01-221154378874892/seport", line 2220, in ? main() File "./tmp/ansible-tmp-1461064894.01-221154378874892/seport", line 252, in main result['changed'] = semanage_port_add(module, ports, proto, setype, do_reload) File "./tmp/ansible-tmp-1461064894.01-221154378874892/seport", line 137, in semanage_port_add seport.set_reload(do_reload) AttributeError: portRecords instance has no attribute 'set_reload'` Yet when I run the command directly in the shell with /usr/sbin/semanage port everything works properly. ##### ACTUAL RESULTS <!--- What actually happened? 
If possible run with high verbosity (-vvvv) --> ``` fatal: [SOMEHOST]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"ports": "YOUR PORT", "proto": "tcp", "reload": true, "setype": "ssh_port_t", "state": "present"}, "module_name": "seport"}, "msg": "SELinux is disabled on this host."} ```
True
seport configuration before selinux reboot - <!--- Verify first that your issue/request is not already reported in GitHub --> This issue has not previously been reported on GitHub ##### ISSUE TYPE <!--- Pick one below and delete the rest: --> - Bug Report ##### COMPONENT NAME <!--- Name of the plugin/module/task --> ansible-modules-extras/blob/devel/system/seport.py ##### ANSIBLE VERSION ``` ansible 2.0.1.0 config file = /etc/ansible/ansible.cfg configured module search path = Default w/o overrides ``` ##### CONFIGURATION <!--- Mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT Ansible Host: CentOS Linux release 7.2.1511 (Core) Ansible Target: Red Hat Enterprise Linux Server release 5.6 (Tikanga) ##### SUMMARY When a new system comes online, there's a long list of things to do, but a critical point is: 1. ensure that SElinux is turned on 2. use `seport` to open custom SSH port under SElinux 3. update SSH configuration for custom port 4. reboot server if necessary for SElinux configuration Problem is you can't execute this with ansible seport as currently written because seport will fail to add a port to the whitelist if you just executed `selinux: policy=targeted state=enforcing` because the system must be rebooted first... Yet if you haven't updated the whitelist, you'll never be able to reconnect after the reboot. ##### STEPS TO REPRODUCE If you run this on a system that doesn't yet have SElinux enabled, it will fail with error: SELinux is disabled on this host. 
``` - name: prep for SElinux dependancies yum: state=present name={{ item }} with_items: - libselinux-python - policycoreutils-python - name: enable SELinux selinux: policy=targeted state=enforcing - name: make sure we've got YOUR PORT for SSH open via SELinux seport: ports=YOUR PORT proto=tcp setype=ssh_port_t state=present - name: standard SSH configuration lineinfile: dest: /etc/ssh/sshd_config state: present create: yes regexp: "{{ item.regexp }}" line: "{{ item.line }}" # validate: '/usr/sbin/sshd -t %s' with_items: - { regexp: '^#?Port\s', line: 'Port YOURPORT' } - { regexp: '^#?PermitRootLogin\s', line: 'PermitRootLogin no' } - { regexp: '^#?X11Forwarding\s', line: 'X11Forwarding no' } notify: restart ssh ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS Personally I think the command should work just fine on a system that has SElinux "enabled" via the configuration options, even if the actual SElinux process isn't yet engaged (via reading the configuration at reboot). Note that even when I commented out the selinux check and tried to run it on the target host, I got: # if not selinux.is_selinux_enabled(): # module.fail_json(msg="SELinux is disabled on this host.") `sudo python ./tmp/ansible-tmp-1461064894.01-221154378874892/seport Traceback (most recent call last): File "./tmp/ansible-tmp-1461064894.01-221154378874892/seport", line 2220, in ? main() File "./tmp/ansible-tmp-1461064894.01-221154378874892/seport", line 252, in main result['changed'] = semanage_port_add(module, ports, proto, setype, do_reload) File "./tmp/ansible-tmp-1461064894.01-221154378874892/seport", line 137, in semanage_port_add seport.set_reload(do_reload) AttributeError: portRecords instance has no attribute 'set_reload'` Yet when I run the command directly in the shell with /usr/sbin/semanage port everything works properly. ##### ACTUAL RESULTS <!--- What actually happened? 
If possible run with high verbosity (-vvvv) --> ``` fatal: [SOMEHOST]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"ports": "YOUR PORT", "proto": "tcp", "reload": true, "setype": "ssh_port_t", "state": "present"}, "module_name": "seport"}, "msg": "SELinux is disabled on this host."} ```
main
seport configuration before selinux reboot this issue has not previously been reported on github issue type bug report component name ansible modules extras blob devel system seport py ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment ansible host centos linux release core ansible target red hat enterprise linux server release tikanga summary when a new system comes online there s a long list of things to do but a critical point is ensure that selinux is turned on use seport to open custom ssh port under selinux update ssh configuration for custom port reboot server if necessary for selinux configuration problem is you can t execute this with ansible seport as currently written because seport will fail to add a port to the whitelist if you just executed selinux policy targeted state enforcing because the system must be rebooted first yet if you haven t updated the whitelist you ll never be able to reconnect after the reboot steps to reproduce if you run this on a system that doesn t yet have selinux enabled it will fail with error selinux is disabled on this host name prep for selinux dependancies yum state present name item with items libselinux python policycoreutils python name enable selinux selinux policy targeted state enforcing name make sure we ve got your port for ssh open via selinux seport ports your port proto tcp setype ssh port t state present name standard ssh configuration lineinfile dest etc ssh sshd config state present create yes regexp item regexp line item line validate usr sbin sshd t s with items regexp port s line port yourport regexp permitrootlogin s line permitrootlogin no regexp s line no notify restart ssh expected results personally i think the command should work just fine on a system that has selinux enabled via the configuration 
options even if the actual selinux process isn t yet engaged via reading the configuration at reboot note that even when i commented out the selinux check and tried to run it on the target host i got if not selinux is selinux enabled module fail json msg selinux is disabled on this host sudo python tmp ansible tmp seport traceback most recent call last file tmp ansible tmp seport line in main file tmp ansible tmp seport line in main result semanage port add module ports proto setype do reload file tmp ansible tmp seport line in semanage port add seport set reload do reload attributeerror portrecords instance has no attribute set reload yet when i run the command directly in the shell with usr sbin semanage port everything works properly actual results fatal failed changed false failed true invocation module args ports your port proto tcp reload true setype ssh port t state present module name seport msg selinux is disabled on this host
1
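The chicken-and-egg ordering in the seport record above (SELinux must be active for `seport` to run, but the SSH port must be whitelisted before the enabling reboot) can be sketched as a guarded playbook fragment. This is a hedged sketch only: the task names, the example port `2222`, and the `when` condition are illustrative assumptions, not part of the reported playbook.

```yaml
# Sketch: only call seport when SELinux is actually active right now;
# on a host where it was just enabled in config but not yet rebooted,
# getenforce still reports "Disabled" and the task is skipped.
- name: check current SELinux runtime status   # hypothetical task
  command: getenforce
  register: selinux_status
  changed_when: false
  failed_when: false

- name: open custom SSH port in the SELinux policy
  seport:
    ports: "2222"          # illustrative port, not from the report
    proto: tcp
    setype: ssh_port_t
    state: present
  when: selinux_status.stdout | default('') in ['Enforcing', 'Permissive']
```

On the skipped path the port would still have to be whitelisted after the reboot (for example by rerunning the play), which is exactly the gap the reporter is describing.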
171,457
13,233,999,585
IssuesEvent
2020-08-18 15:36:50
avocado-framework/avocado
https://api.github.com/repos/avocado-framework/avocado
closed
Python-unittest: Avocado do not "nrun" tests in the current directory
nrun2run test runner and core APIs
Running the python-unittest from the `unit` directory with `avocado run` works as expected: ``` [wrampazz@wrampazz unit]$ pwd /home/wrampazz/src/avocado/avocado.dev/selftests/unit [wrampazz@wrampazz unit]$ avocado run test_plugin_assets.py /home/wrampazz/src/avocado/avocado.dev/avocado/plugins/run.py:296: FutureWarning: The following arguments will be changed to boolean soon: sysinfo, output-check, failfast and keep-tmp. warnings.warn("The following arguments will be changed to boolean soon: " JOB ID : 4b4bfb3b65595e7450036ff4265e85550663d819 JOB LOG : /home/wrampazz/avocado/job-results/job-2020-05-22T09.47-4b4bfb3/job.log (01/10) test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess_fail: PASS (0.22 s) (02/10) test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess: PASS (0.22 s) (03/10) test_plugin_assets.AssetsPlugin.test_fetch_assets_fail: PASS (0.22 s) (04/10) test_plugin_assets.AssetsPlugin.test_fetch_assets_empty_calls: PASS (0.22 s) (05/10) test_plugin_assets.AssetsClass.test_visit_classdef_valid_class: PASS (0.23 s) (06/10) test_plugin_assets.AssetsClass.test_visit_classdef_invalid_class: PASS (0.22 s) (07/10) test_plugin_assets.AssetsClass.test_visit_fuctiondef_valid_class: PASS (0.22 s) (08/10) test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class: PASS (0.22 s) (09/10) test_plugin_assets.AssetsClass.test_visit_assign_valid_class_method: PASS (0.23 s) (10/10) test_plugin_assets.AssetsClass.test_visit_assign_invalid_class_method: PASS (0.23 s) RESULTS : PASS 10 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0 JOB HTML : /home/wrampazz/avocado/job-results/job-2020-05-22T09.47-4b4bfb3/results.html JOB TIME : 2.72 s ``` Running the same python-unittest from the `unit` directory with `avocado nrun` does not work as expected: ``` [wrampazz@wrampazz unit]$ avocado nrun test_plugin_assets.py Status server started at: 127.0.0.1:8888 3-test_plugin_assets.AssetsPlugin.test_fetch_assets_fail spawned and alive 
5-test_plugin_assets.AssetsClass.test_visit_classdef_valid_class spawned and alive 6-test_plugin_assets.AssetsClass.test_visit_classdef_invalid_class spawned and alive 10-test_plugin_assets.AssetsClass.test_visit_assign_invalid_class_method spawned and alive 8-test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class spawned and alive 2-test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess spawned and alive 9-test_plugin_assets.AssetsClass.test_visit_assign_valid_class_method spawned and alive 7-test_plugin_assets.AssetsClass.test_visit_fuctiondef_valid_class spawned and alive 4-test_plugin_assets.AssetsPlugin.test_fetch_assets_empty_calls spawned and alive 1-test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess_fail spawned and alive Finished spawning tasks Status server: exiting due to all tasks finished Tasks result summary: {'error': 10} Tasks ended with 'error': 6-test_plugin_assets.AssetsClass.test_visit_classdef_invalid_class, 9-test_plugin_assets.AssetsClass.test_visit_assign_valid_class_method, 5-test_plugin_assets.AssetsClass.test_visit_classdef_valid_class, 2-test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess, 1-test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess_fail, 3-test_plugin_assets.AssetsPlugin.test_fetch_assets_fail, 7-test_plugin_assets.AssetsClass.test_visit_fuctiondef_valid_class, 10-test_plugin_assets.AssetsClass.test_visit_assign_invalid_class_method, 8-test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class, 4-test_plugin_assets.AssetsPlugin.test_fetch_assets_empty_calls [wrampazz@wrampazz unit]$ ``` Running with verbose shows `No module named <test_name>` for each test, here is one as an example: ``` Task started: 8-test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class. 
Outputdir: /var/tmp/.avocado-task-przaad_k Task complete (error): 8-test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class Task 8-test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class output: ====================================================================== ERROR: test_plugin_assets (unittest.loader._FailedTest) ---------------------------------------------------------------------- ImportError: Failed to import test module: test_plugin_assets Traceback (most recent call last): File "/usr/lib64/python3.8/unittest/loader.py", line 154, in loadTestsFromName module = __import__(module_name) ModuleNotFoundError: No module named 'test_plugin_assets' ---------------------------------------------------------------------- Ran 1 test in 0.000s FAILED (errors=1) ``` But running the python-unittest using `avocado nrun` from the Avocado root project directory works: ``` [wrampazz@wrampazz avocado.dev]$ pwd /home/wrampazz/src/avocado/avocado.dev [wrampazz@wrampazz avocado.dev]$ avocado nrun selftests/unit/test_plugin_assets.py Status server started at: 127.0.0.1:8888 3-selftests.unit.test_plugin_assets.AssetsPlugin.test_fetch_assets_fail spawned and alive 5-selftests.unit.test_plugin_assets.AssetsClass.test_visit_classdef_valid_class spawned and alive 4-selftests.unit.test_plugin_assets.AssetsPlugin.test_fetch_assets_empty_calls spawned and alive 7-selftests.unit.test_plugin_assets.AssetsClass.test_visit_fuctiondef_valid_class spawned and alive 2-selftests.unit.test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess spawned and alive 8-selftests.unit.test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class spawned and alive 10-selftests.unit.test_plugin_assets.AssetsClass.test_visit_assign_invalid_class_method spawned and alive 9-selftests.unit.test_plugin_assets.AssetsClass.test_visit_assign_valid_class_method spawned and alive 6-selftests.unit.test_plugin_assets.AssetsClass.test_visit_classdef_invalid_class spawned and alive 
1-selftests.unit.test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess_fail spawned and alive Finished spawning tasks Status server: exiting due to all tasks finished Tasks result summary: {'pass': 10} ``` Avocado `nrun` should behave the same way as the `run` command when running python-unittests from different directories.
1.0
Python-unittest: Avocado do not "nrun" tests in the current directory - Running the python-unittest from the `unit` directory with `avocado run` works as expected: ``` [wrampazz@wrampazz unit]$ pwd /home/wrampazz/src/avocado/avocado.dev/selftests/unit [wrampazz@wrampazz unit]$ avocado run test_plugin_assets.py /home/wrampazz/src/avocado/avocado.dev/avocado/plugins/run.py:296: FutureWarning: The following arguments will be changed to boolean soon: sysinfo, output-check, failfast and keep-tmp. warnings.warn("The following arguments will be changed to boolean soon: " JOB ID : 4b4bfb3b65595e7450036ff4265e85550663d819 JOB LOG : /home/wrampazz/avocado/job-results/job-2020-05-22T09.47-4b4bfb3/job.log (01/10) test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess_fail: PASS (0.22 s) (02/10) test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess: PASS (0.22 s) (03/10) test_plugin_assets.AssetsPlugin.test_fetch_assets_fail: PASS (0.22 s) (04/10) test_plugin_assets.AssetsPlugin.test_fetch_assets_empty_calls: PASS (0.22 s) (05/10) test_plugin_assets.AssetsClass.test_visit_classdef_valid_class: PASS (0.23 s) (06/10) test_plugin_assets.AssetsClass.test_visit_classdef_invalid_class: PASS (0.22 s) (07/10) test_plugin_assets.AssetsClass.test_visit_fuctiondef_valid_class: PASS (0.22 s) (08/10) test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class: PASS (0.22 s) (09/10) test_plugin_assets.AssetsClass.test_visit_assign_valid_class_method: PASS (0.23 s) (10/10) test_plugin_assets.AssetsClass.test_visit_assign_invalid_class_method: PASS (0.23 s) RESULTS : PASS 10 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0 JOB HTML : /home/wrampazz/avocado/job-results/job-2020-05-22T09.47-4b4bfb3/results.html JOB TIME : 2.72 s ``` Running the same python-unittest from the `unit` directory with `avocado nrun` does not work as expected: ``` [wrampazz@wrampazz unit]$ avocado nrun test_plugin_assets.py Status server started at: 127.0.0.1:8888 
3-test_plugin_assets.AssetsPlugin.test_fetch_assets_fail spawned and alive 5-test_plugin_assets.AssetsClass.test_visit_classdef_valid_class spawned and alive 6-test_plugin_assets.AssetsClass.test_visit_classdef_invalid_class spawned and alive 10-test_plugin_assets.AssetsClass.test_visit_assign_invalid_class_method spawned and alive 8-test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class spawned and alive 2-test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess spawned and alive 9-test_plugin_assets.AssetsClass.test_visit_assign_valid_class_method spawned and alive 7-test_plugin_assets.AssetsClass.test_visit_fuctiondef_valid_class spawned and alive 4-test_plugin_assets.AssetsPlugin.test_fetch_assets_empty_calls spawned and alive 1-test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess_fail spawned and alive Finished spawning tasks Status server: exiting due to all tasks finished Tasks result summary: {'error': 10} Tasks ended with 'error': 6-test_plugin_assets.AssetsClass.test_visit_classdef_invalid_class, 9-test_plugin_assets.AssetsClass.test_visit_assign_valid_class_method, 5-test_plugin_assets.AssetsClass.test_visit_classdef_valid_class, 2-test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess, 1-test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess_fail, 3-test_plugin_assets.AssetsPlugin.test_fetch_assets_fail, 7-test_plugin_assets.AssetsClass.test_visit_fuctiondef_valid_class, 10-test_plugin_assets.AssetsClass.test_visit_assign_invalid_class_method, 8-test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class, 4-test_plugin_assets.AssetsPlugin.test_fetch_assets_empty_calls [wrampazz@wrampazz unit]$ ``` Running with verbose shows `No module named <test_name>` for each test, here is one as an example: ``` Task started: 8-test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class. 
Outputdir: /var/tmp/.avocado-task-przaad_k Task complete (error): 8-test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class Task 8-test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class output: ====================================================================== ERROR: test_plugin_assets (unittest.loader._FailedTest) ---------------------------------------------------------------------- ImportError: Failed to import test module: test_plugin_assets Traceback (most recent call last): File "/usr/lib64/python3.8/unittest/loader.py", line 154, in loadTestsFromName module = __import__(module_name) ModuleNotFoundError: No module named 'test_plugin_assets' ---------------------------------------------------------------------- Ran 1 test in 0.000s FAILED (errors=1) ``` But running the python-unittest using `avocado nrun` from the Avocado root project directory works: ``` [wrampazz@wrampazz avocado.dev]$ pwd /home/wrampazz/src/avocado/avocado.dev [wrampazz@wrampazz avocado.dev]$ avocado nrun selftests/unit/test_plugin_assets.py Status server started at: 127.0.0.1:8888 3-selftests.unit.test_plugin_assets.AssetsPlugin.test_fetch_assets_fail spawned and alive 5-selftests.unit.test_plugin_assets.AssetsClass.test_visit_classdef_valid_class spawned and alive 4-selftests.unit.test_plugin_assets.AssetsPlugin.test_fetch_assets_empty_calls spawned and alive 7-selftests.unit.test_plugin_assets.AssetsClass.test_visit_fuctiondef_valid_class spawned and alive 2-selftests.unit.test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess spawned and alive 8-selftests.unit.test_plugin_assets.AssetsClass.test_visit_fuctiondef_invalid_class spawned and alive 10-selftests.unit.test_plugin_assets.AssetsClass.test_visit_assign_invalid_class_method spawned and alive 9-selftests.unit.test_plugin_assets.AssetsClass.test_visit_assign_valid_class_method spawned and alive 6-selftests.unit.test_plugin_assets.AssetsClass.test_visit_classdef_invalid_class spawned and alive 
1-selftests.unit.test_plugin_assets.AssetsPlugin.test_fetch_assets_sucess_fail spawned and alive Finished spawning tasks Status server: exiting due to all tasks finished Tasks result summary: {'pass': 10} ``` Avocado `nrun` should behave the same way as the `run` command when running python-unittests from different directories.
non_main
python unittest avocado do not nrun tests in the current directory running the python unittest from the unit directory with avocado run works as expected pwd home wrampazz src avocado avocado dev selftests unit avocado run test plugin assets py home wrampazz src avocado avocado dev avocado plugins run py futurewarning the following arguments will be changed to boolean soon sysinfo output check failfast and keep tmp warnings warn the following arguments will be changed to boolean soon job id job log home wrampazz avocado job results job job log test plugin assets assetsplugin test fetch assets sucess fail pass s test plugin assets assetsplugin test fetch assets sucess pass s test plugin assets assetsplugin test fetch assets fail pass s test plugin assets assetsplugin test fetch assets empty calls pass s test plugin assets assetsclass test visit classdef valid class pass s test plugin assets assetsclass test visit classdef invalid class pass s test plugin assets assetsclass test visit fuctiondef valid class pass s test plugin assets assetsclass test visit fuctiondef invalid class pass s test plugin assets assetsclass test visit assign valid class method pass s test plugin assets assetsclass test visit assign invalid class method pass s results pass error fail skip warn interrupt cancel job html home wrampazz avocado job results job results html job time s running the same python unittest from the unit directory with avocado nrun does not work as expected avocado nrun test plugin assets py status server started at test plugin assets assetsplugin test fetch assets fail spawned and alive test plugin assets assetsclass test visit classdef valid class spawned and alive test plugin assets assetsclass test visit classdef invalid class spawned and alive test plugin assets assetsclass test visit assign invalid class method spawned and alive test plugin assets assetsclass test visit fuctiondef invalid class spawned and alive test plugin assets assetsplugin test fetch assets 
sucess spawned and alive test plugin assets assetsclass test visit assign valid class method spawned and alive test plugin assets assetsclass test visit fuctiondef valid class spawned and alive test plugin assets assetsplugin test fetch assets empty calls spawned and alive test plugin assets assetsplugin test fetch assets sucess fail spawned and alive finished spawning tasks status server exiting due to all tasks finished tasks result summary error tasks ended with error test plugin assets assetsclass test visit classdef invalid class test plugin assets assetsclass test visit assign valid class method test plugin assets assetsclass test visit classdef valid class test plugin assets assetsplugin test fetch assets sucess test plugin assets assetsplugin test fetch assets sucess fail test plugin assets assetsplugin test fetch assets fail test plugin assets assetsclass test visit fuctiondef valid class test plugin assets assetsclass test visit assign invalid class method test plugin assets assetsclass test visit fuctiondef invalid class test plugin assets assetsplugin test fetch assets empty calls running with verbose shows no module named for each test here is one as an example task started test plugin assets assetsclass test visit fuctiondef invalid class outputdir var tmp avocado task przaad k task complete error test plugin assets assetsclass test visit fuctiondef invalid class task test plugin assets assetsclass test visit fuctiondef invalid class output error test plugin assets unittest loader failedtest importerror failed to import test module test plugin assets traceback most recent call last file usr unittest loader py line in loadtestsfromname module import module name modulenotfounderror no module named test plugin assets ran test in failed errors but running the python unittest using avocado nrun from the avocado root project directory works pwd home wrampazz src avocado avocado dev avocado nrun selftests unit test plugin assets py status server started at 
selftests unit test plugin assets assetsplugin test fetch assets fail spawned and alive selftests unit test plugin assets assetsclass test visit classdef valid class spawned and alive selftests unit test plugin assets assetsplugin test fetch assets empty calls spawned and alive selftests unit test plugin assets assetsclass test visit fuctiondef valid class spawned and alive selftests unit test plugin assets assetsplugin test fetch assets sucess spawned and alive selftests unit test plugin assets assetsclass test visit fuctiondef invalid class spawned and alive selftests unit test plugin assets assetsclass test visit assign invalid class method spawned and alive selftests unit test plugin assets assetsclass test visit assign valid class method spawned and alive selftests unit test plugin assets assetsclass test visit classdef invalid class spawned and alive selftests unit test plugin assets assetsplugin test fetch assets sucess fail spawned and alive finished spawning tasks status server exiting due to all tasks finished tasks result summary pass avocado nrun should behave the same way as the run command when running python unittests from different directories
0
107,414
9,212,044,110
IssuesEvent
2019-03-09 20:34:06
bazo-blockchain/lazo
https://api.github.com/repos/bazo-blockchain/lazo
closed
Lexer: Test FixToken
testing
FixTokens should be tested. Run `./scripts/test.sh` and open the **coverage.html** file to see the actual test coverage.
1.0
Lexer: Test FixToken - FixTokens should be tested. Run `./scripts/test.sh` and open the **coverage.html** file to see the actual test coverage.
non_main
lexer test fixtoken fixtokens should be tested run scripts test sh and open the coverage html file to see the actual test coverage
0
1,487
6,425,057,157
IssuesEvent
2017-08-09 14:43:18
ansible/ansible
https://api.github.com/repos/ansible/ansible
closed
pacemaker module can't cleanup resources
affects_2.4 bug_report module needs_maintainer support:community
##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pacemaker_cluster module ##### ANSIBLE VERSION ansible 2.4.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/gsciorti/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/gsciorti/PycharmProjects/ansible/build/lib/ansible executable location = /usr/bin/ansible python version = 2.7.13 (default, May 10 2017, 20:04:28) [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)] ##### CONFIGURATION Default configuration ##### OS / ENVIRONMENT Ansible host: Fedora 25 Managed host: RHEL 7.2 running a pacemaker cluster with: - pacemaker-1.1.15-11.el7_3.2.x86_64 - pcs-0.9.152-10.el7_3.1.x86_64 - corosync-2.4.0-4.el7.x86_64 ##### SUMMARY The ansible module running with the parameter state=cleanup fails with an error without executing the command "pcs resource cleanup" ##### STEPS TO REPRODUCE ``` $ ansible r7node1 -b -m pacemaker_cluster -a 'state=cleanup' r7node1 | FAILED! => { "changed": false, "failed": true, "module_stderr": "Shared connection to r7node1 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_dhzoKL/ansible_module_pacemaker_cluster.py\", line 234, in <module>\r\n main()\r\n File \"/tmp/ansible_dhzoKL/ansible_module_pacemaker_cluster.py\", line 228, in main\r\n set_cluster(module, state, timeout, force)\r\n File \"/tmp/ansible_dhzoKL/ansible_module_pacemaker_cluster.py\", line 125, in set_cluster\r\n rc, out, err = module.run_command(cmd)\r\nUnboundLocalError: local variable 'cmd' referenced before assignment\r\n", "msg": "MODULE FAILURE", "rc": 0 } ``` ##### EXPECTED RESULTS The command "pcs resource cleanup" should be executed on the cluster node. 
Note: The function [clean_cluster](https://github.com/ansible/ansible/blob/0765ceb66dfad116ec0519a0f9272158da6600d0/lib/ansible/modules/clustering/pacemaker_cluster.py#L106) has been defined in the module pacemaker_cluster but this module call the function [set_cluster](https://github.com/ansible/ansible/blob/0765ceb66dfad116ec0519a0f9272158da6600d0/lib/ansible/modules/clustering/pacemaker_cluster.py#L106) that doesn't manage the cleanup operation ##### ACTUAL RESULTS ``` $ ansible r7node1 -b -m pacemaker_cluster -a 'state=cleanup' -v Using /etc/ansible/ansible.cfg as config file r7node1 | FAILED! => { "changed": false, "failed": true, "module_stderr": "Shared connection to r7node1 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_6AUAP8/ansible_module_pacemaker_cluster.py\", line 234, in <module>\r\n main()\r\n File \"/tmp/ansible_6AUAP8/ansible_module_pacemaker_cluster.py\", line 228, in main\r\n set_cluster(module, state, timeout, force)\r\n File \"/tmp/ansible_6AUAP8/ansible_module_pacemaker_cluster.py\", line 125, in set_cluster\r\n rc, out, err = module.run_command(cmd)\r\nUnboundLocalError: local variable 'cmd' referenced before assignment\r\n", "msg": "MODULE FAILURE", "rc": 0 } ```
True
pacemaker module can't cleanup resources - ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME pacemaker_cluster module ##### ANSIBLE VERSION ansible 2.4.0 config file = /etc/ansible/ansible.cfg configured module search path = [u'/home/gsciorti/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/gsciorti/PycharmProjects/ansible/build/lib/ansible executable location = /usr/bin/ansible python version = 2.7.13 (default, May 10 2017, 20:04:28) [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)] ##### CONFIGURATION Default configuration ##### OS / ENVIRONMENT Ansible host: Fedora 25 Managed host: RHEL 7.2 running a pacemaker cluster with: - pacemaker-1.1.15-11.el7_3.2.x86_64 - pcs-0.9.152-10.el7_3.1.x86_64 - corosync-2.4.0-4.el7.x86_64 ##### SUMMARY The ansible module running with the parameter state=cleanup fails with an error without executing the command "pcs resource cleanup" ##### STEPS TO REPRODUCE ``` $ ansible r7node1 -b -m pacemaker_cluster -a 'state=cleanup' r7node1 | FAILED! => { "changed": false, "failed": true, "module_stderr": "Shared connection to r7node1 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_dhzoKL/ansible_module_pacemaker_cluster.py\", line 234, in <module>\r\n main()\r\n File \"/tmp/ansible_dhzoKL/ansible_module_pacemaker_cluster.py\", line 228, in main\r\n set_cluster(module, state, timeout, force)\r\n File \"/tmp/ansible_dhzoKL/ansible_module_pacemaker_cluster.py\", line 125, in set_cluster\r\n rc, out, err = module.run_command(cmd)\r\nUnboundLocalError: local variable 'cmd' referenced before assignment\r\n", "msg": "MODULE FAILURE", "rc": 0 } ``` ##### EXPECTED RESULTS The command "pcs resource cleanup" should be executed on the cluster node. 
Note: The function [clean_cluster](https://github.com/ansible/ansible/blob/0765ceb66dfad116ec0519a0f9272158da6600d0/lib/ansible/modules/clustering/pacemaker_cluster.py#L106) has been defined in the module pacemaker_cluster but this module call the function [set_cluster](https://github.com/ansible/ansible/blob/0765ceb66dfad116ec0519a0f9272158da6600d0/lib/ansible/modules/clustering/pacemaker_cluster.py#L106) that doesn't manage the cleanup operation ##### ACTUAL RESULTS ``` $ ansible r7node1 -b -m pacemaker_cluster -a 'state=cleanup' -v Using /etc/ansible/ansible.cfg as config file r7node1 | FAILED! => { "changed": false, "failed": true, "module_stderr": "Shared connection to r7node1 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_6AUAP8/ansible_module_pacemaker_cluster.py\", line 234, in <module>\r\n main()\r\n File \"/tmp/ansible_6AUAP8/ansible_module_pacemaker_cluster.py\", line 228, in main\r\n set_cluster(module, state, timeout, force)\r\n File \"/tmp/ansible_6AUAP8/ansible_module_pacemaker_cluster.py\", line 125, in set_cluster\r\n rc, out, err = module.run_command(cmd)\r\nUnboundLocalError: local variable 'cmd' referenced before assignment\r\n", "msg": "MODULE FAILURE", "rc": 0 } ```
main
pacemaker module can t cleanup resources issue type bug report component name pacemaker cluster module ansible version ansible config file etc ansible ansible cfg configured module search path ansible python module location home gsciorti pycharmprojects ansible build lib ansible executable location usr bin ansible python version default may configuration default configuration os environment ansible host fedora managed host rhel running a pacemaker cluster with pacemaker pcs corosync summary the ansible module running with the parameter state cleanup fails with an error without executing the command pcs resource cleanup steps to reproduce ansible b m pacemaker cluster a state cleanup failed changed false failed true module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible dhzokl ansible module pacemaker cluster py line in r n main r n file tmp ansible dhzokl ansible module pacemaker cluster py line in main r n set cluster module state timeout force r n file tmp ansible dhzokl ansible module pacemaker cluster py line in set cluster r n rc out err module run command cmd r nunboundlocalerror local variable cmd referenced before assignment r n msg module failure rc expected results the command pcs resource cleanup should be executed on the cluster node note the function has been defined in the module pacemaker cluster but this module call the function that doesn t manage the cleanup operation actual results ansible b m pacemaker cluster a state cleanup v using etc ansible ansible cfg as config file failed changed false failed true module stderr shared connection to closed r n module stdout traceback most recent call last r n file tmp ansible ansible module pacemaker cluster py line in r n main r n file tmp ansible ansible module pacemaker cluster py line in main r n set cluster module state timeout force r n file tmp ansible ansible module pacemaker cluster py line in set cluster r n rc out err module run command 
cmd r nunboundlocalerror local variable cmd referenced before assignment r n msg module failure rc
1
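The pacemaker record above ends with a traceback whose root cause is an `UnboundLocalError`: the module's `set_cluster` only binds `cmd` for states it knows about, so `state=cleanup` reaches `run_command(cmd)` with `cmd` never assigned. A minimal sketch of the bug pattern and the fix, using hypothetical command strings (not the actual module code):

```python
# Hypothetical sketch: bind `cmd` in every branch, or fail early with a
# clear error, instead of letting an unhandled state fall through to an
# UnboundLocalError when `cmd` is used.
def build_cluster_command(state):
    if state == "online":
        cmd = "pcs cluster start"
    elif state == "offline":
        cmd = "pcs cluster stop"
    elif state == "cleanup":
        # The missing branch in the report above: handle cleanup explicitly.
        cmd = "pcs resource cleanup"
    else:
        # Explicit failure beats a confusing UnboundLocalError.
        raise ValueError("unsupported state: %s" % state)
    return cmd
```

This mirrors why the reported fix is to wire the existing cleanup helper into `set_cluster` rather than letting the state fall through unhandled.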
418
3,489,613,447
IssuesEvent
2016-01-04 01:01:08
caskroom/homebrew-cask
https://api.github.com/repos/caskroom/homebrew-cask
closed
Use bundle ID to generate zap stanzas
awaiting maintainer feedback cask enhancement
Compare zap stanzas for any two apps, e.g. Sublime Text 2: ``` '~/Library/Application Support/Sublime Text 2', '~/Library/Preferences/com.sublimetext.2.plist', '~/Library/Caches/com.sublimetext.2', '~/Library/Saved Application State/com.sublimetext.2.savedState' ``` and Atom: ``` '~/.atom', '~/Library/Application Support/ShipIt_stderr.log', '~/Library/Application Support/Atom', '~/Library/Application Support/ShipIt_stdout.log', '~/Library/Application Support/com.github.atom.ShipIt', '~/Library/Caches/com.github.atom', '~/Library/Preferences/com.github.atom.plist' ``` We can see that paths like `'~/Library/Preferences/com.github.atom.plist'` are formed by combining a well-known directory with a bundle identifier (of course, there can be more than one per package). Storing bundle IDs would allow us to automatically generate partial zap stanzas. uninstall stanza already allows storing bundle ids, but for a different purpose — perhaps this can be unified? The list of paths I've gathered so far: ``` ~/Library/Application Support/com.apple.sharedfilelist/com.apple.LSSharedFileList.ApplicationRecentDocuments/#{bundleID}.sfl ~/Library/Cookies/#{bundleID}.binarycookies ~/Library/Caches/#{bundleID}/ ~/Library/Preferences/#{bundleID}.plist ~/Library/Preferences/#{bundleID}.LSSharedFileList.plist ~/Library/Saved Application State/#{bundleId}.savedState/ ```
True
Use bundle ID to generate zap stanzas - Compare zap stanzas for any two apps, e.g. Sublime Text 2: ``` '~/Library/Application Support/Sublime Text 2', '~/Library/Preferences/com.sublimetext.2.plist', '~/Library/Caches/com.sublimetext.2', '~/Library/Saved Application State/com.sublimetext.2.savedState' ``` and Atom: ``` '~/.atom', '~/Library/Application Support/ShipIt_stderr.log', '~/Library/Application Support/Atom', '~/Library/Application Support/ShipIt_stdout.log', '~/Library/Application Support/com.github.atom.ShipIt', '~/Library/Caches/com.github.atom', '~/Library/Preferences/com.github.atom.plist' ``` We can see that paths like `'~/Library/Preferences/com.github.atom.plist'` are formed by combining a well-known directory with a bundle identifier (of course, there can be more than one per package). Storing bundle IDs would allow us to automatically generate partial zap stanzas. uninstall stanza already allows storing bundle ids, but for a different purpose — perhaps this can be unified? The list of paths I've gathered so far: ``` ~/Library/Application Support/com.apple.sharedfilelist/com.apple.LSSharedFileList.ApplicationRecentDocuments/#{bundleID}.sfl ~/Library/Cookies/#{bundleID}.binarycookies ~/Library/Caches/#{bundleID}/ ~/Library/Preferences/#{bundleID}.plist ~/Library/Preferences/#{bundleID}.LSSharedFileList.plist ~/Library/Saved Application State/#{bundleId}.savedState/ ```
main
use bundle id to generate zap stanzas compare zap stanzas for any two apps e g sublime text library application support sublime text library preferences com sublimetext plist library caches com sublimetext library saved application state com sublimetext savedstate and atom atom library application support shipit stderr log library application support atom library application support shipit stdout log library application support com github atom shipit library caches com github atom library preferences com github atom plist we can see that paths like library preferences com github atom plist are formed by combining a well known directory with a bundle identifier of course there can be more than one per package storing bundle ids would allow us to automatically generate partial zap stanzas uninstall stanza already allows storing bundle ids but for a different purpose — perhaps this can be unified the list of paths i ve gathered so far library application support com apple sharedfilelist com apple lssharedfilelist applicationrecentdocuments bundleid sfl library cookies bundleid binarycookies library caches bundleid library preferences bundleid plist library preferences bundleid lssharedfilelist plist library saved application state bundleid savedstate
1
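The zap-stanza record above observes that many zap paths are a well-known directory combined with a bundle ID. A sketch of that generation idea in Python (Homebrew-Cask itself is Ruby; the template list below is taken from the paths gathered in the issue, and the function name is hypothetical):

```python
# Path templates from the issue, parameterized on the bundle ID.
ZAP_TEMPLATES = [
    "~/Library/Caches/{bundle_id}/",
    "~/Library/Preferences/{bundle_id}.plist",
    "~/Library/Preferences/{bundle_id}.LSSharedFileList.plist",
    "~/Library/Cookies/{bundle_id}.binarycookies",
    "~/Library/Saved Application State/{bundle_id}.savedState/",
]

def zap_paths(bundle_id):
    # Generate the partial zap stanza for one bundle ID; a package with
    # several bundle IDs would concatenate the results.
    return [t.format(bundle_id=bundle_id) for t in ZAP_TEMPLATES]
```

For example, `zap_paths("com.github.atom")` reproduces several of the Atom entries listed above, which is the automation the issue is asking for.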
756
4,351,919,971
IssuesEvent
2016-08-01 02:56:24
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
glance_image module sets HAS_GLANCECLIENT, but checks for HAVE_GLANCECLIENT
bug_report cloud waiting_on_maintainer
On line 137 glance_image module does: ``` try: import glanceclient HAS_GLANCECLIENT = True except ImportError: HAS_GLANCECLIENT = False try: from keystoneclient.v2_0 import client as ksclient HAS_KEYSTONECLIENT = True except ImportError: HAS_KEYSTONECLIENT= False ``` then later on line 250: if not HAVE_GLANCECLIENT: module.fail_json(msg='python-glanceclient is required for this module') if not HAVE_KEYSTONECLIENT: module.fail_json(msg='python-keystoneclient is required for this module') Probably should be setting the same variable it's checking for?
True
glance_image module sets HAS_GLANCECLIENT, but checks for HAVE_GLANCECLIENT - On line 137 glance_image module does: ``` try: import glanceclient HAS_GLANCECLIENT = True except ImportError: HAS_GLANCECLIENT = False try: from keystoneclient.v2_0 import client as ksclient HAS_KEYSTONECLIENT = True except ImportError: HAS_KEYSTONECLIENT= False ``` then later on line 250: if not HAVE_GLANCECLIENT: module.fail_json(msg='python-glanceclient is required for this module') if not HAVE_KEYSTONECLIENT: module.fail_json(msg='python-keystoneclient is required for this module') Probably should be setting the same variable it's checking for?
main
glance image module sets has glanceclient but checks for have glanceclient on line glance image module does try import glanceclient has glanceclient true except importerror has glanceclient false try from keystoneclient import client as ksclient has keystoneclient true except importerror has keystoneclient false then later on line if not have glanceclient module fail json msg python glanceclient is required for this module if not have keystoneclient module fail json msg python keystoneclient is required for this module probably should be setting the same variable it s checking for
1
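The glance_image record above is a plain name mismatch: the import guard sets `HAS_GLANCECLIENT` but the later check reads `HAVE_GLANCECLIENT`, which is never defined. A sketch of the import-guard pattern done consistently (the `fail` callback stands in for Ansible's `module.fail_json` and is a hypothetical simplification):

```python
# Set and check the SAME flag name; the reported bug tested a flag
# (HAVE_GLANCECLIENT) that was never assigned.
try:
    import glanceclient  # may be absent on the managed host
    HAS_GLANCECLIENT = True
except ImportError:
    HAS_GLANCECLIENT = False

def check_requirements(fail):
    # `fail` is a stand-in for module.fail_json(msg=...)
    if not HAS_GLANCECLIENT:
        fail("python-glanceclient is required for this module")
```

The same rename applies to the keystoneclient check in the snippet above.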
1,899
6,577,549,829
IssuesEvent
2017-09-12 01:41:43
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
Ability to specify multiple AZ's when creating multiple cache nodes with elasticache module
affects_2.0 aws cloud feature_idea waiting_on_maintainer
##### Issue Type: - Feature Idea ##### Plugin Name: module: elasticache ##### Ansible Version: ansible 2.0.1.0 ##### Ansible Configuration: [defaults] force_color = 1 hostfile = /etc/ansible/hosts library = /usr/share/ansible nocows = 1 ##### Environment: N/A ##### Summary: The Ansible elasticache module does not support spreading cache nodes across different availability zones. ##### Steps To Reproduce: The "zone" parameter for the elasticache module should support specifying a list of AZ's when multiple cache nodes are being created. ##### Expected Results: Each cache node will be distributed among the different AZ's listed for the "zone" parameter. ##### Actual Results: Currently the elasticache module allows specifying one AZ and all cache nodes are put into the same AZ which decreases high availability.
True
Ability to specify multiple AZ's when creating multiple cache nodes with elasticache module - ##### Issue Type: - Feature Idea ##### Plugin Name: module: elasticache ##### Ansible Version: ansible 2.0.1.0 ##### Ansible Configuration: [defaults] force_color = 1 hostfile = /etc/ansible/hosts library = /usr/share/ansible nocows = 1 ##### Environment: N/A ##### Summary: The Ansible elasticache module does not support spreading cache nodes across different availability zones. ##### Steps To Reproduce: The "zone" parameter for the elasticache module should support specifying a list of AZ's when multiple cache nodes are being created. ##### Expected Results: Each cache node will be distributed among the different AZ's listed for the "zone" parameter. ##### Actual Results: Currently the elasticache module allows specifying one AZ and all cache nodes are put into the same AZ which decreases high availability.
main
ability to specify multiple az s when creating multiple cache nodes with elasticache module issue type feature idea plugin name module elasticache ansible version ansible ansible configuration force color hostfile etc ansible hosts library usr share ansible nocows environment n a summary the ansible elasticache module does not support spreading cache nodes across different availability zones steps to reproduce the zone parameter for the elasticache module should support specifying a list of az s when multiple cache nodes are being created expected results each cache node will be distributed among the different az s listed for the zone parameter actual results currently the elasticache module allows specifying one az and all cache nodes are put into the same az which decreases high availability
1
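The elasticache record above asks for cache nodes to be spread across a list of AZs instead of a single one. The usual way to realize "each cache node will be distributed among the different AZ's listed" is a round-robin assignment; a minimal sketch with hypothetical names:

```python
def assign_zones(node_count, zones):
    # Round-robin the requested cache nodes across the listed
    # availability zones so no single AZ holds every node.
    return [zones[i % len(zones)] for i in range(node_count)]
```

With four nodes and two zones, each zone receives two nodes, which is the high-availability behavior the feature request describes.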
2,141
7,363,076,017
IssuesEvent
2018-03-12 00:57:13
caskroom/homebrew-cask
https://api.github.com/repos/caskroom/homebrew-cask
closed
[brew cask upgrade] Error: undefined method `<=' for nil:NilClass
awaiting maintainer feedback
#### General troubleshooting steps - [x] I have retried my command with `--force` and the issue is still present. - [x] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue. - [x] None of the templates was appropriate for my issue, or I’m not sure. - [x] I ran `brew update-reset && brew update` and retried my command. - [x] I ran `brew doctor`, fixed as many issues as possible and retried my command. - [x] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md). #### Description of issue Seems like there's a missing Nil check, or something to that effect. #### Output of your command with `--verbose --debug` ``` ==> Upgrading 6 outdated packages, with result: anki 2.0.49, java 9.0.4,11:c2514751926b4512b076cc82f959763f, multimc 0.6.1, universal-media-server 6.7.4, vienna 3.2.1, virtualbox 5.2.8,121009 ==> Started upgrade process for Cask anki Error: undefined method `<=' for nil:NilClass Did you mean? 
<=> Follow the instructions here: https://github.com/caskroom/homebrew-cask#reporting-bugs /usr/local/Caskroom/anki/.metadata/2.0.36/20160613211401.385/Casks/anki.rb:2:in `block in load' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask.rb:23:in `instance_eval' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask.rb:23:in `initialize' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:29:in `new' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:29:in `cask' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:65:in `cask' /usr/local/Homebrew/Library/Homebrew/compat/hbc/cask_loader.rb:10:in `cask' /usr/local/Caskroom/anki/.metadata/2.0.36/20160613211401.385/Casks/anki.rb:1:in `load' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:55:in `instance_eval' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:55:in `load' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:168:in `load' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/upgrade.rb:35:in `block in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/upgrade.rb:29:in `each' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/upgrade.rb:29:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:98:in `run_command' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:168:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:100:in `<main>' Error: Kernel.exit /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:179:in `exit' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:179:in `rescue in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:156:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' 
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:100:in `<main>' ``` #### Output of `brew cask doctor` ``` ==> Homebrew-Cask Version Homebrew-Cask 1.5.7 caskroom/homebrew-cask (git revision 7fd05; last commit 2018-03-03) ==> macOS 10.13.3 ==> SIP Enabled ==> Java 1.8.0_66, 1.8.0_25, 1.8.0_11, 1.8.0_05, 1.7.0_67 ==> Homebrew-Cask Install Location <NONE> ==> Homebrew-Cask Staging Location /usr/local/Caskroom ==> Homebrew-Cask Cached Downloads ~/Library/Caches/Homebrew/Cask ==> Homebrew-Cask Taps: /usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3925 casks) /usr/local/Homebrew/Library/Taps/caskroom/homebrew-fonts (1160 casks) /usr/local/Homebrew/Library/Taps/caskroom/homebrew-versions (172 casks) ==> Contents of $LOAD_PATH /usr/local/Homebrew/Library/Homebrew/cask/lib /usr/local/Homebrew/Library/Homebrew /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/gems/2.3.0/gems/did_you_mean-1.0.0/lib /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/site_ruby/2.3.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/site_ruby/2.3.0/x86_64-darwin9.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/site_ruby/2.3.0/universal-darwin9.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/site_ruby /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/vendor_ruby/2.3.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/vendor_ruby/2.3.0/x86_64-darwin9.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/vendor_ruby/2.3.0/universal-darwin9.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/vendor_ruby /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/2.3.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/2.3.0/x86_64-darwin9.0 
/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/2.3.0/universal-darwin9.0 ==> Environment Variables LC_ALL="en_US.UTF-8" PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/Homebrew/Library/Homebrew/shims/scm" SHELL="/bin/bash" ```
True
[brew cask upgrade] Error: undefined method `<=' for nil:NilClass - #### General troubleshooting steps - [x] I have retried my command with `--force` and the issue is still present. - [x] I have checked the instructions for [reporting bugs](https://github.com/caskroom/homebrew-cask#reporting-bugs) (or [making requests](https://github.com/caskroom/homebrew-cask#requests)) before opening the issue. - [x] None of the templates was appropriate for my issue, or I’m not sure. - [x] I ran `brew update-reset && brew update` and retried my command. - [x] I ran `brew doctor`, fixed as many issues as possible and retried my command. - [x] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/caskroom/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md). #### Description of issue Seems like there's a missing Nil check, or something to that effect. #### Output of your command with `--verbose --debug` ``` ==> Upgrading 6 outdated packages, with result: anki 2.0.49, java 9.0.4,11:c2514751926b4512b076cc82f959763f, multimc 0.6.1, universal-media-server 6.7.4, vienna 3.2.1, virtualbox 5.2.8,121009 ==> Started upgrade process for Cask anki Error: undefined method `<=' for nil:NilClass Did you mean? 
<=> Follow the instructions here: https://github.com/caskroom/homebrew-cask#reporting-bugs /usr/local/Caskroom/anki/.metadata/2.0.36/20160613211401.385/Casks/anki.rb:2:in `block in load' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask.rb:23:in `instance_eval' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask.rb:23:in `initialize' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:29:in `new' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:29:in `cask' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:65:in `cask' /usr/local/Homebrew/Library/Homebrew/compat/hbc/cask_loader.rb:10:in `cask' /usr/local/Caskroom/anki/.metadata/2.0.36/20160613211401.385/Casks/anki.rb:1:in `load' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:55:in `instance_eval' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:55:in `load' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cask_loader.rb:168:in `load' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/upgrade.rb:35:in `block in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/upgrade.rb:29:in `each' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/upgrade.rb:29:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli/abstract_command.rb:35:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:98:in `run_command' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:168:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' /usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:100:in `<main>' Error: Kernel.exit /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:179:in `exit' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:179:in `rescue in run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:156:in `run' /usr/local/Homebrew/Library/Homebrew/cask/lib/hbc/cli.rb:132:in `run' 
/usr/local/Homebrew/Library/Homebrew/cmd/cask.rb:8:in `cask' /usr/local/Homebrew/Library/Homebrew/brew.rb:100:in `<main>' ``` #### Output of `brew cask doctor` ``` ==> Homebrew-Cask Version Homebrew-Cask 1.5.7 caskroom/homebrew-cask (git revision 7fd05; last commit 2018-03-03) ==> macOS 10.13.3 ==> SIP Enabled ==> Java 1.8.0_66, 1.8.0_25, 1.8.0_11, 1.8.0_05, 1.7.0_67 ==> Homebrew-Cask Install Location <NONE> ==> Homebrew-Cask Staging Location /usr/local/Caskroom ==> Homebrew-Cask Cached Downloads ~/Library/Caches/Homebrew/Cask ==> Homebrew-Cask Taps: /usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask (3925 casks) /usr/local/Homebrew/Library/Taps/caskroom/homebrew-fonts (1160 casks) /usr/local/Homebrew/Library/Taps/caskroom/homebrew-versions (172 casks) ==> Contents of $LOAD_PATH /usr/local/Homebrew/Library/Homebrew/cask/lib /usr/local/Homebrew/Library/Homebrew /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/gems/2.3.0/gems/did_you_mean-1.0.0/lib /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/site_ruby/2.3.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/site_ruby/2.3.0/x86_64-darwin9.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/site_ruby/2.3.0/universal-darwin9.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/site_ruby /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/vendor_ruby/2.3.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/vendor_ruby/2.3.0/x86_64-darwin9.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/vendor_ruby/2.3.0/universal-darwin9.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/vendor_ruby /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/2.3.0 /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/2.3.0/x86_64-darwin9.0 
/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.3/lib/ruby/2.3.0/universal-darwin9.0 ==> Environment Variables LC_ALL="en_US.UTF-8" PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/Homebrew/Library/Homebrew/shims/scm" SHELL="/bin/bash" ```
main
error undefined method for nil nilclass general troubleshooting steps i have retried my command with force and the issue is still present i have checked the instructions for or before opening the issue none of the templates was appropriate for my issue or i’m not sure i ran brew update reset brew update and retried my command i ran brew doctor fixed as many issues as possible and retried my command i understand that description of issue seems like there s a missing nil check or something to that effect output of your command with verbose debug upgrading outdated packages with result anki java multimc universal media server vienna virtualbox started upgrade process for cask anki error undefined method for nil nilclass did you mean follow the instructions here usr local caskroom anki metadata casks anki rb in block in load usr local homebrew library homebrew cask lib hbc cask rb in instance eval usr local homebrew library homebrew cask lib hbc cask rb in initialize usr local homebrew library homebrew cask lib hbc cask loader rb in new usr local homebrew library homebrew cask lib hbc cask loader rb in cask usr local homebrew library homebrew cask lib hbc cask loader rb in cask usr local homebrew library homebrew compat hbc cask loader rb in cask usr local caskroom anki metadata casks anki rb in load usr local homebrew library homebrew cask lib hbc cask loader rb in instance eval usr local homebrew library homebrew cask lib hbc cask loader rb in load usr local homebrew library homebrew cask lib hbc cask loader rb in load usr local homebrew library homebrew cask lib hbc cli upgrade rb in block in run usr local homebrew library homebrew cask lib hbc cli upgrade rb in each usr local homebrew library homebrew cask lib hbc cli upgrade rb in run usr local homebrew library homebrew cask lib hbc cli abstract command rb in run usr local homebrew library homebrew cask lib hbc cli rb in run command usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew 
library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in error kernel exit usr local homebrew library homebrew cask lib hbc cli rb in exit usr local homebrew library homebrew cask lib hbc cli rb in rescue in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cask lib hbc cli rb in run usr local homebrew library homebrew cmd cask rb in cask usr local homebrew library homebrew brew rb in output of brew cask doctor homebrew cask version homebrew cask caskroom homebrew cask git revision last commit macos sip enabled java homebrew cask install location homebrew cask staging location usr local caskroom homebrew cask cached downloads library caches homebrew cask homebrew cask taps usr local homebrew library taps caskroom homebrew cask casks usr local homebrew library taps caskroom homebrew fonts casks usr local homebrew library taps caskroom homebrew versions casks contents of load path usr local homebrew library homebrew cask lib usr local homebrew library homebrew usr local homebrew library homebrew vendor portable ruby lib ruby gems gems did you mean lib usr local homebrew library homebrew vendor portable ruby lib ruby site ruby usr local homebrew library homebrew vendor portable ruby lib ruby site ruby usr local homebrew library homebrew vendor portable ruby lib ruby site ruby universal usr local homebrew library homebrew vendor portable ruby lib ruby site ruby usr local homebrew library homebrew vendor portable ruby lib ruby vendor ruby usr local homebrew library homebrew vendor portable ruby lib ruby vendor ruby usr local homebrew library homebrew vendor portable ruby lib ruby vendor ruby universal usr local homebrew library homebrew vendor portable ruby lib ruby vendor ruby usr local homebrew library homebrew vendor portable ruby lib ruby usr local homebrew library homebrew vendor portable ruby lib ruby usr local homebrew 
library homebrew vendor portable ruby lib ruby universal environment variables lc all en us utf path usr bin bin usr sbin sbin usr local homebrew library homebrew shims scm shell bin bash
1
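The brew cask upgrade record above crashes with ``undefined method `<=' for nil:NilClass``, i.e. a comparison reached a value that was never set (the reporter's "missing Nil check"). A language-neutral sketch of the guard, in Python with `None` standing in for Ruby's `nil` and a hypothetical function name:

```python
def version_le(a, b):
    # Guard against a missing version instead of letting the comparison
    # blow up, mirroring the nil `<=` crash in the report above.
    if a is None or b is None:
        return False
    return a <= b
```

In the real fix the guard would live in Ruby at the point where the cask's version is compared during upgrade.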
285,362
8,757,854,566
IssuesEvent
2018-12-14 22:59:36
danielcaldas/react-d3-graph
https://api.github.com/repos/danielcaldas/react-d3-graph
closed
Display Name of Edge on Graph
duplicate feature request in progress priority normal wontfix
I was wondering if there was a way to display the label (or name attribute) of an edge on the graph so that it would be easy to see what the relationship between two nodes is? Read through the documentation but wasn't able to find anything, apologies if i missed it.
1.0
Display Name of Edge on Graph - I was wondering if there was a way to display the label (or name attribute) of an edge on the graph so that it would be easy to see what the relationship between two nodes is? Read through the documentation but wasn't able to find anything, apologies if i missed it.
non_main
display name of edge on graph i was wondering if there was a way to display the label or name attribute of an edge on the graph so that it would be easy to see what the relationship between two nodes is read through the documentation but wasn t able to find anything apologies if i missed it
0
1,114
4,988,947,314
IssuesEvent
2016-12-08 10:09:36
ansible/ansible-modules-core
https://api.github.com/repos/ansible/ansible-modules-core
closed
find_module: allow regex in paths
affects_2.1 feature_idea waiting_on_maintainer
Issue Type: Feature Idea Component Name: find module Ansible Version: ansible 2.1.0 (devel 210cf06d9a) last updated 2016/01/04 11:21:26 (GMT +200) Environment: Ubuntu 15.04 Example: To replace a simple ``` find /home/*/foo/bar/ ``` with the new "find" module you have to use a lot of nested "find" calls (one for each sub folder level). It would be great if this could be done with one simple call. At the moment this is the result: ``` "msg": "/home/*/foo/bar/ was skipped as it does not seem to be a valid directory or it cannot be accessed\n" ```
True
find_module: allow regex in paths - Issue Type: Feature Idea Component Name: find module Ansible Version: ansible 2.1.0 (devel 210cf06d9a) last updated 2016/01/04 11:21:26 (GMT +200) Environment: Ubuntu 15.04 Example: To replace a simple ``` find /home/*/foo/bar/ ``` with the new "find" module you have to use a lot of nested "find" calls (one for each sub folder level). It would be great if this could be done with one simple call. At the moment this is the result: ``` "msg": "/home/*/foo/bar/ was skipped as it does not seem to be a valid directory or it cannot be accessed\n" ```
main
find module allow regex in paths issue type feature idea component name find module ansible version ansible devel last updated gmt environment ubuntu example to replace a simple find home foo bar with the new find module you have to use a lot of nested find calls one for each sub folder level it would be great if this could be done with one simple call at the moment this is the result msg home foo bar was skipped as it does not seem to be a valid directory or it cannot be accessed n
1
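The find_module record above wants `/home/*/foo/bar/` expanded the way shell `find` would see it, instead of being rejected as "not a valid directory". In Python the standard way to expand such a wildcard before scanning each directory is the stdlib `glob` module; a minimal sketch (the function name is hypothetical):

```python
import glob

def expand_paths(pattern):
    # glob.glob returns every EXISTING path matching the wildcard
    # pattern, e.g. /home/*/foo/bar/ -> one entry per user that has
    # that subtree; sorted for deterministic ordering.
    return sorted(glob.glob(pattern))
```

Each expanded directory could then be fed to the find module individually, which is what the nested "find" workaround in the report does by hand.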
334,426
10,141,717,373
IssuesEvent
2019-08-03 16:44:28
mit-cml/appinventor-sources
https://api.github.com/repos/mit-cml/appinventor-sources
opened
Bitwise operators not appearing in nb178
affects: master issue: noted for future Work priority: high regression
The bitwise operator blocks no longer appear due to the way BlockSubset was implemented. The additional blocks need to be added to drawer.js
1.0
Bitwise operators not appearing in nb178 - The bitwise operator blocks no longer appear due to the way BlockSubset was implemented. The additional blocks need to be added to drawer.js
non_main
bitwise operators not appearing in the bitwise operator blocks no longer appear due to the way blocksubset was implemented the additional blocks need to be added to drawer js
0