Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4,886 | 25,072,177,024 | IssuesEvent | 2022-11-07 13:02:13 | chocolatey-community/chocolatey-package-requests | https://api.github.com/repos/chocolatey-community/chocolatey-package-requests | closed | RFM - aaclr | Status: Available For Maintainer(s) | ## I DON'T Want To Become The Maintainer
- [x] I have followed the Package Triage Process and I do NOT want to become maintainer of the package;
- [x] There is no existing open maintainer request for this package;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://chocolatey.org/packages/aaclr
Package source URL: https://github.com/dtgm/chocolatey-packages
Date the maintainer was contacted (in YYYY-MM-DD): 2021-12-13
How the maintainer was contacted: Email
| True | RFM - aaclr - ## I DON'T Want To Become The Maintainer
- [x] I have followed the Package Triage Process and I do NOT want to become maintainer of the package;
- [x] There is no existing open maintainer request for this package;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://chocolatey.org/packages/aaclr
Package source URL: https://github.com/dtgm/chocolatey-packages
Date the maintainer was contacted (in YYYY-MM-DD): 2021-12-13
How the maintainer was contacted: Email
| main | rfm aaclr i don t want to become the maintainer i have followed the package triage process and i do not want to become maintainer of the package there is no existing open maintainer request for this package checklist issue title starts with rfm existing package details package url package source url date the maintainer was contacted in yyyy mm dd how the maintainer was contacted email | 1 |
194,427 | 22,261,983,202 | IssuesEvent | 2022-06-10 01:56:30 | Trinadh465/device_renesas_kernel_AOSP10_r33 | https://api.github.com/repos/Trinadh465/device_renesas_kernel_AOSP10_r33 | reopened | CVE-2021-20292 (Medium) detected in linuxlinux-4.19.239, linuxlinux-4.19.239 | security vulnerability | ## CVE-2021-20292 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.239</b>, <b>linuxlinux-4.19.239</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There is a flaw reported in the Linux kernel in versions before 5.9 in drivers/gpu/drm/nouveau/nouveau_sgdma.c in nouveau_sgdma_create_ttm in Nouveau DRM subsystem. The issue results from the lack of validating the existence of an object prior to performing operations on the object. An attacker with a local account with a root privilege, can leverage this vulnerability to escalate privileges and execute code in the context of the kernel.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20292>CVE-2021-20292</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-20292">https://www.linuxkernelcves.com/cves/CVE-2021-20292</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: v4.19.140, v5.4.59, v5.7.16, v5.8.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-20292 (Medium) detected in linuxlinux-4.19.239, linuxlinux-4.19.239 - ## CVE-2021-20292 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.239</b>, <b>linuxlinux-4.19.239</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
There is a flaw reported in the Linux kernel in versions before 5.9 in drivers/gpu/drm/nouveau/nouveau_sgdma.c in nouveau_sgdma_create_ttm in Nouveau DRM subsystem. The issue results from the lack of validating the existence of an object prior to performing operations on the object. An attacker with a local account with a root privilege, can leverage this vulnerability to escalate privileges and execute code in the context of the kernel.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-20292>CVE-2021-20292</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-20292">https://www.linuxkernelcves.com/cves/CVE-2021-20292</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: v4.19.140, v5.4.59, v5.7.16, v5.8.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve medium detected in linuxlinux linuxlinux cve medium severity vulnerability vulnerable libraries linuxlinux linuxlinux vulnerability details there is a flaw reported in the linux kernel in versions before in drivers gpu drm nouveau nouveau sgdma c in nouveau sgdma create ttm in nouveau drm subsystem the issue results from the lack of validating the existence of an object prior to performing operations on the object an attacker with a local account with a root privilege can leverage this vulnerability to escalate privileges and execute code in the context of the kernel publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
827,297 | 31,765,057,160 | IssuesEvent | 2023-09-12 08:16:30 | filamentphp/filament | https://api.github.com/repos/filamentphp/filament | opened | Test | bug unconfirmed low priority | ### Package
filament/filament
### Package Version
vNothing
### Laravel Version
vNothing
### Livewire Version
vNothing
### PHP Version
vNothing
### Problem description
N/A
### Expected behavior
N/A
### Steps to reproduce
N/A
### Reproduction repository
N/A
### Relevant log output
```shell
N/A
```
| 1.0 | Test - ### Package
filament/filament
### Package Version
vNothing
### Laravel Version
vNothing
### Livewire Version
vNothing
### PHP Version
vNothing
### Problem description
N/A
### Expected behavior
N/A
### Steps to reproduce
N/A
### Reproduction repository
N/A
### Relevant log output
```shell
N/A
```
| non_main | test package filament filament package version vnothing laravel version vnothing livewire version vnothing php version vnothing problem description n a expected behavior n a steps to reproduce n a reproduction repository n a relevant log output shell n a | 0 |
1,630 | 6,572,656,330 | IssuesEvent | 2017-09-11 04:08:02 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | .npm cache gets populated at /root with ownership of sudo user | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
npm module
##### ANSIBLE VERSION
```
ansible 2.1.1.0
```
##### CONFIGURATION
```
vars:
node_branch: '4.x'
node_version: '4.6.2-1nodesource1~trusty1'
tasks:
- name: node | Register NodeSource signing key
become: true
apt_key: url=https://deb.nodesource.com/gpgkey/nodesource.gpg.key state=present
- name: node | Add NodeSource repository
become: true
apt_repository: repo='{{item}}' state=present
with_items:
- deb https://deb.nodesource.com/node_{{node_branch}} trusty main
- deb-src https://deb.nodesource.com/node_{{node_branch}} trusty main
- name: node | Install Node.js
become: true
apt: name='nodejs={{node_version}}' update_cache=yes state=present
- name: node | Install pm2
become: true
npm: name=pm2 global=yes version=2.1.4
```
##### OS / ENVIRONMENT
```
nodejs v4.6.2 LTS shipping with npm v2.5.11
Mac OS X 10.11.6 (local)
Ubuntu 14.04 LTS (provisioning target)
```
##### SUMMARY
If a module is installed globally the ownership of npm's cache (the `.npm` folder) after provisioning is not correct which causes subsequent errors and strange behaviors due to wrong permissions on that folder.
After provisioning with Ansible's npm module the npm cache is populated at `/root/.npm` but with ownership `deploy:deploy` where `deploy` is the user that is used for SSH-login on the remote machine.
As global installation of npm modules requires root privileges `become: true` is set and causes invocation via `sudo`. I found out that in these cases Ansible invokes `npm` via `sudo -H ...` which causes `$HOME` to be changed to `/root`. This might cause population of `.npm` folder at `$HOME/.npm` == `/root/.npm` but with ownership of the user defined by `$SUDO_USER` which is still `deploy`.
Unfortunately there is no proper documentation on how npm handles installs via sudo, so this is only a guess from my side.
Nevertheless this can also be reproduced manually via `deploy:~$ sudo -H npm install -g pm2` which causes the same behavior. If the flag `-H` is omitted everything is fine as the `.npm` folder gets populated at `/home/deploy/.npm` with ownership `deploy:deploy` but I do not see any configuration option for Ansible to influence the parameters for the sudo-invocation.
##### STEPS TO REPRODUCE
see above
##### EXPECTED RESULTS
npm's cache should be populated at `/home/deploy/.npm` with ownership `deploy:deploy`.
##### ACTUAL RESULTS
npm's cache is populated at `/root/.npm` with ownership `deploy:deploy`.
```
root:~# ls -la
-rw------- 1 root root 6893 Nov 11 07:26 .bash_history
-rw-r--r-- 1 root root 3106 Feb 20 2014 .bashrc
drwxr-xr-x 3 deploy deploy 4096 Nov 10 21:01 .npm/
-rw-r--r-- 1 root root 141 Nov 7 19:55 .profile
drwx------ 2 root root 4096 Jul 29 10:00 .ssh/
-rw------- 1 root root 3787 Nov 9 20:42 .viminfo
```
| True | .npm cache gets populated at /root with ownership of sudo user - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
npm module
##### ANSIBLE VERSION
```
ansible 2.1.1.0
```
##### CONFIGURATION
```
vars:
node_branch: '4.x'
node_version: '4.6.2-1nodesource1~trusty1'
tasks:
- name: node | Register NodeSource signing key
become: true
apt_key: url=https://deb.nodesource.com/gpgkey/nodesource.gpg.key state=present
- name: node | Add NodeSource repository
become: true
apt_repository: repo='{{item}}' state=present
with_items:
- deb https://deb.nodesource.com/node_{{node_branch}} trusty main
- deb-src https://deb.nodesource.com/node_{{node_branch}} trusty main
- name: node | Install Node.js
become: true
apt: name='nodejs={{node_version}}' update_cache=yes state=present
- name: node | Install pm2
become: true
npm: name=pm2 global=yes version=2.1.4
```
##### OS / ENVIRONMENT
```
nodejs v4.6.2 LTS shipping with npm v2.5.11
Mac OS X 10.11.6 (local)
Ubuntu 14.04 LTS (provisioning target)
```
##### SUMMARY
If a module is installed globally the ownership of npm's cache (the `.npm` folder) after provisioning is not correct which causes subsequent errors and strange behaviors due to wrong permissions on that folder.
After provisioning with Ansible's npm module the npm cache is populated at `/root/.npm` but with ownership `deploy:deploy` where `deploy` is the user that is used for SSH-login on the remote machine.
As global installation of npm modules requires root privileges `become: true` is set and causes invocation via `sudo`. I found out that in these cases Ansible invokes `npm` via `sudo -H ...` which causes `$HOME` to be changed to `/root`. This might cause population of `.npm` folder at `$HOME/.npm` == `/root/.npm` but with ownership of the user defined by `$SUDO_USER` which is still `deploy`.
Unfortunately there is no proper documentation on how npm handles installs via sudo, so this is only a guess from my side.
Nevertheless this can also be reproduced manually via `deploy:~$ sudo -H npm install -g pm2` which causes the same behavior. If the flag `-H` is omitted everything is fine as the `.npm` folder gets populated at `/home/deploy/.npm` with ownership `deploy:deploy` but I do not see any configuration option for Ansible to influence the parameters for the sudo-invocation.
##### STEPS TO REPRODUCE
see above
##### EXPECTED RESULTS
npm's cache should be populated at `/home/deploy/.npm` with ownership `deploy:deploy`.
##### ACTUAL RESULTS
npm's cache is populated at `/root/.npm` with ownership `deploy:deploy`.
```
root:~# ls -la
-rw------- 1 root root 6893 Nov 11 07:26 .bash_history
-rw-r--r-- 1 root root 3106 Feb 20 2014 .bashrc
drwxr-xr-x 3 deploy deploy 4096 Nov 10 21:01 .npm/
-rw-r--r-- 1 root root 141 Nov 7 19:55 .profile
drwx------ 2 root root 4096 Jul 29 10:00 .ssh/
-rw------- 1 root root 3787 Nov 9 20:42 .viminfo
```
| main | npm cache gets populated at root with ownership of sudo user issue type bug report component name npm module ansible version ansible configuration vars node branch x node version tasks name node register nodesource signing key become true apt key url state present name node add nodesource repository become true apt repository repo item state present with items deb trusty main deb src trusty main name node install node js become true apt name nodejs node version update cache yes state present name node install become true npm name global yes version os environment nodejs lts shipping with npm mac os x local ubuntu lts provisioning target summary if a module is installed globally the ownership of npm s cache the npm folder after provisioning is not correct which causes subsequent errors and strange behaviors due to wrong permissions on that folder after provisioning with ansible s npm module the npm cache is populated at root npm but with ownership deploy deploy where deploy is the user that is used for ssh login on the remote machine as global installation of npm modules requires root privileges become true is set and causes invocation via sudo i found out that in these cases ansible invokes npm via sudo h which causes home to be changed to root this might cause population of npm folder at home npm root npm but with ownership of the user defined by sudo user which is still deploy unfortunately there is no proper documentation on how npm handles installs via sudo so this is only a guess from my side nevertheless this can also be reproduced manually via deploy sudo h npm install g which causes the same behavior if the flag h is omitted everything is fine as the npm folder gets populated at home deploy npm with ownership deploy deploy but i do not see any configuration option for ansible to influence the parameters for the sudo invocation steps to reproduce see above expected results npm s cache should be populated at home deploy npm with ownership deploy 
deploy actual results npm s cache is populated at root npm with ownership deploy deploy root ls la rw root root nov bash history rw r r root root feb bashrc drwxr xr x deploy deploy nov npm rw r r root root nov profile drwx root root jul ssh rw root root nov viminfo | 1 |
286,689 | 31,720,941,630 | IssuesEvent | 2023-09-10 12:07:28 | TomasiDeveloping/ExpensesTracker | https://api.github.com/repos/TomasiDeveloping/ExpensesTracker | closed | sweetalert2-11.7.3.tgz: 1 vulnerabilities (highest severity is: 5.3) | Mend: dependency security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sweetalert2-11.7.3.tgz</b></p></summary>
<p></p>
<p>Library home page: <a href="https://registry.npmjs.org/sweetalert2/-/sweetalert2-11.7.3.tgz">https://registry.npmjs.org/sweetalert2/-/sweetalert2-11.7.3.tgz</a></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/TomasiDeveloping/ExpensesTracker/commit/7671290d086466bd3fe985b9968e287ff7d69ca0">7671290d086466bd3fe985b9968e287ff7d69ca0</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (sweetalert2 version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [WS-2023-0250](https://github.com/advisories/GHSA-mrr8-v49w-3333) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | sweetalert2-11.7.3.tgz | Direct | N/A | ❌ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> WS-2023-0250</summary>
### Vulnerable Library - <b>sweetalert2-11.7.3.tgz</b></p>
<p></p>
<p>Library home page: <a href="https://registry.npmjs.org/sweetalert2/-/sweetalert2-11.7.3.tgz">https://registry.npmjs.org/sweetalert2/-/sweetalert2-11.7.3.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **sweetalert2-11.7.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TomasiDeveloping/ExpensesTracker/commit/7671290d086466bd3fe985b9968e287ff7d69ca0">7671290d086466bd3fe985b9968e287ff7d69ca0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
sweetalert2 versions 11.6.14 and above have potentially undesirable behavior. The package outputs audio and/or video messages that do not pertain to the functionality of the package when run on specific tlds. This functionality is documented on the project's readme
<p>Publish Date: 2023-07-10
<p>URL: <a href=https://github.com/advisories/GHSA-mrr8-v49w-3333>WS-2023-0250</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | True | sweetalert2-11.7.3.tgz: 1 vulnerabilities (highest severity is: 5.3) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sweetalert2-11.7.3.tgz</b></p></summary>
<p></p>
<p>Library home page: <a href="https://registry.npmjs.org/sweetalert2/-/sweetalert2-11.7.3.tgz">https://registry.npmjs.org/sweetalert2/-/sweetalert2-11.7.3.tgz</a></p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/TomasiDeveloping/ExpensesTracker/commit/7671290d086466bd3fe985b9968e287ff7d69ca0">7671290d086466bd3fe985b9968e287ff7d69ca0</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (sweetalert2 version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [WS-2023-0250](https://github.com/advisories/GHSA-mrr8-v49w-3333) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | sweetalert2-11.7.3.tgz | Direct | N/A | ❌ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> WS-2023-0250</summary>
### Vulnerable Library - <b>sweetalert2-11.7.3.tgz</b></p>
<p></p>
<p>Library home page: <a href="https://registry.npmjs.org/sweetalert2/-/sweetalert2-11.7.3.tgz">https://registry.npmjs.org/sweetalert2/-/sweetalert2-11.7.3.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **sweetalert2-11.7.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TomasiDeveloping/ExpensesTracker/commit/7671290d086466bd3fe985b9968e287ff7d69ca0">7671290d086466bd3fe985b9968e287ff7d69ca0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
sweetalert2 versions 11.6.14 and above have potentially undesirable behavior. The package outputs audio and/or video messages that do not pertain to the functionality of the package when run on specific tlds. This functionality is documented on the project's readme
<p>Publish Date: 2023-07-10
<p>URL: <a href=https://github.com/advisories/GHSA-mrr8-v49w-3333>WS-2023-0250</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | non_main | tgz vulnerabilities highest severity is vulnerable library tgz library home page a href found in head commit a href vulnerabilities cve severity cvss dependency type fixed in version remediation possible medium tgz direct n a in some cases remediation pr cannot be created automatically for a vulnerability despite the availability of remediation details ws vulnerable library tgz library home page a href dependency hierarchy x tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions and above have potentially undesirable behavior the package outputs audio and or video messages that do not pertain to the functionality of the package when run on specific tlds this functionality is documented on the project s readme publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href step up your open source security game with mend | 0 |
198,341 | 14,974,024,920 | IssuesEvent | 2021-01-28 02:31:16 | microsoft/AzureStorageExplorer | https://api.github.com/repos/microsoft/AzureStorageExplorer | closed | An error dialog pops up when executing 'Propagate Access Control Lists…' for a SAS attached ADLS Gen2 blob container | :beetle: regression :gear: adls gen2 :heavy_check_mark: merged 🧪 testing | **Storage Explorer Version:** 1.17.0
**Build Number:** 20210127.3
**Branch:** main
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ MacOS Catalina
**Architecture:** ia32/x64
**Regression From:** Previous build (20210123.6)
**Steps to reproduce:**
1. Expand one ADLS Gen2 storage account -> Blob Containers.
2. Create a blob container -> Attach it via SAS with full permissions.
3. Right click the SAS attached blob container -> Click 'Propagate Access Control Lists...' -> Click 'OK'.
4. Check there no error dialog pops up.
**Expect Experience:**
No error dialog pops up.
**Actual Experience:**
An error dialog pops up.
 | 1.0 | An error dialog pops up when executing 'Propagate Access Control Lists…' for a SAS attached ADLS Gen2 blob container - **Storage Explorer Version:** 1.17.0
**Build Number:** 20210127.3
**Branch:** main
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ MacOS Catalina
**Architecture:** ia32/x64
**Regression From:** Previous build (20210123.6)
**Steps to reproduce:**
1. Expand one ADLS Gen2 storage account -> Blob Containers.
2. Create a blob container -> Attach it via SAS with full permissions.
3. Right click the SAS attached blob container -> Click 'Propagate Access Control Lists...' -> Click 'OK'.
4. Check there no error dialog pops up.
**Expect Experience:**
No error dialog pops up.
**Actual Experience:**
An error dialog pops up.
 | non_main | an error dialog pops up when executing propagate access control lists… for a sas attached adls blob container storage explorer version build number branch main platform os windows linux ubuntu macos catalina architecture regression from previous build steps to reproduce expand one adls storage account blob containers create a blob container attach it via sas with full permissions right click the sas attached blob container click propagate access control lists click ok check there no error dialog pops up expect experience no error dialog pops up actual experience an error dialog pops up | 0 |
314,577 | 23,528,973,224 | IssuesEvent | 2022-08-19 13:37:44 | vegaprotocol/specs | https://api.github.com/repos/vegaprotocol/specs | opened | New spec to detail behaviour of the candles subscription | documentation specs | To help develop and test the candles data, we need to write a spec we can agree on and use as the reference for future work and testing. | 1.0 | New spec to detail behaviour of the candles subscription - To help develop and test the candles data, we need to write a spec we can agree on and use as the reference for future work and testing. | non_main | new spec to detail behaviour of the candles subscription to help develop and test the candles data we need to write a spec we can agree on and use as the reference for future work and testing | 0 |
229,546 | 25,362,277,151 | IssuesEvent | 2022-11-21 01:02:34 | DavidSpek/pipelines | https://api.github.com/repos/DavidSpek/pipelines | opened | CVE-2022-41885 (Medium) detected in tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl | security vulnerability | ## CVE-2022-41885 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /contrib/components/openvino/ovms-deployer/containers/requirements.txt</p>
<p>Path to vulnerable library: /contrib/components/openvino/ovms-deployer/containers/requirements.txt,/samples/core/ai_platform/training</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DavidSpek/pipelines/commit/6f7433f006e282c4f25441e7502b80d73751e38f">6f7433f006e282c4f25441e7502b80d73751e38f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. When `tf.raw_ops.FusedResizeAndPadConv2D` is given a large tensor shape, it overflows. We have patched the issue in GitHub commit d66e1d568275e6a2947de97dca7a102a211e01ce. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.
<p>Publish Date: 2022-11-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41885>CVE-2022-41885</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-41885">https://www.cve.org/CVERecord?id=CVE-2022-41885</a></p>
<p>Release Date: 2022-11-18</p>
<p>Fix Resolution: tensorflow - 2.7.4, 2.8.1, 2.9.1, 2.10.0, tensorflow-cpu - 2.7.4, 2.8.1, 2.9.1, 2.10.0, tensorflow-gpu - 2.7.4, 2.8.1, 2.9.1, 2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
4,559 | 23,727,632,973 | IssuesEvent | 2022-08-30 21:15:46 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | sam local commands fail for Hello World lambda | area/local/invoke maintainer/need-followup | sam local functions all fail for default HelloWorld function, reporting "No response from invoke container for HelloWorldFunction" / "Invalid lambda response received: Lambda response must be valid json"
Environment:
* aws-cli : SAM CLI, version 1.18.0
* docker: Docker version 19.03.1, build 74b1e89e8a
* python 3.7.9
* Virtualbox 6.1.18
* Windows 7
Set Up:
Create the default HelloWorld application, using python3.7 :
`> sam init`
Template Source Choice 1: AWS Quick Start Templates
Package Choice 1: zip
Runtime Choice 9: Python 3.7
Template choice 1: Hello World example
`> sam build`
`> sam local invoke --debug`
```
2021-02-12 14:57:04,629 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2021-02-12 14:57:04,630 | local invoke command is called
2021-02-12 14:57:04,643 | No Parameters detected in the template
2021-02-12 14:57:04,736 | 2 resources found in the template
2021-02-12 14:57:04,736 | Found Serverless function with name='HelloWorldFunction' and CodeUri='HelloWorldFunction'
2021-02-12 14:57:04,795 | Found one Lambda function with name 'HelloWorldFunction'
2021-02-12 14:57:04,795 | Invoking app.lambda_handler (python3.7)
2021-02-12 14:57:04,796 | No environment variables found for function 'HelloWorldFunction'
2021-02-12 14:57:04,796 | Environment variables overrides data is standard format
2021-02-12 14:57:04,797 | Loading AWS credentials from session with profile 'None'
2021-02-12 14:57:04,829 | Resolving code path. Cwd=F:\projects\sam-app\.aws-sam\build, CodeUri=HelloWorldFunction
2021-02-12 14:57:04,829 | Resolved absolute path to code is F:\projects\sam-app\.aws-sam\build\HelloWorldFunction
2021-02-12 14:57:04,830 | Code F:\projects\sam-app\.aws-sam\build\HelloWorldFunction is not a zip/jar file
2021-02-12 14:57:04,889 | Skip pulling image and use local one: amazon/aws-sam-cli-emulation-image-python3.7:rapid-1.18.0.
2021-02-12 14:57:04,890 | Mounting F:\projects\sam-app\.aws-sam\build\HelloWorldFunction as /var/task:ro,delegated inside runtime container
2021-02-12 14:57:05,375 | Starting a timer for 30 seconds for function 'HelloWorldFunction'
2021-02-12 14:57:09,258 | Cleaning all decompressed code dirs
2021-02-12 14:57:09,267 | No response from invoke container for HelloWorldFunction
2021-02-12 14:57:09,296 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '8ebfedc3-168e-4b85-a900-55cfe5379a91', 'installationId': '124efb15-d1bd-47bb-8b98-e654f948fb6d', 'sessionId': '353b9e74-a108-489e-9083-519c0dce5779', 'executionEnvironment': 'CL
I', 'pyversion': '3.8.7', 'samcliVersion': '1.18.0', 'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam local invoke', 'duration': 4674, 'exitReason': 'success', 'exitCode': 0}}]}
2021-02-12 14:57:09,876 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
```
Running start-api and using a browser to access /hello reports a similar error
` sam local start-api --debug`
```
2021-02-12 15:02:01,721 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2021-02-12 15:02:02,327 | local start-api command is called
2021-02-12 15:02:02,339 | No Parameters detected in the template
2021-02-12 15:02:02,418 | 2 resources found in the template
2021-02-12 15:02:02,419 | Found Serverless function with name='HelloWorldFunction' and CodeUri='HelloWorldFunction'
2021-02-12 15:02:02,452 | No Parameters detected in the template
2021-02-12 15:02:02,515 | 2 resources found in the template
2021-02-12 15:02:02,516 | Found '1' API Events in Serverless function with name 'HelloWorldFunction'
2021-02-12 15:02:02,516 | Detected Inline Swagger definition
2021-02-12 15:02:02,517 | Lambda function integration not found in Swagger document at path='/hello' method='get'
2021-02-12 15:02:02,517 | Found '0' APIs in resource 'ServerlessRestApi'
2021-02-12 15:02:02,518 | Removed duplicates from '0' Explicit APIs and '1' Implicit APIs to produce '1' APIs
2021-02-12 15:02:02,518 | 1 APIs found in the template
2021-02-12 15:02:02,547 | Mounting HelloWorldFunction at http://127.0.0.1:3000/hello [GET]
2021-02-12 15:02:02,547 | You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions, changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template
2021-02-12 15:02:02,548 | Localhost server is starting up. Multi-threading = True
2021-02-12 15:02:02 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
2021-02-12 15:03:20,948 | Constructed String representation of Event to invoke Lambda. Event: {"body": null, "headers": {"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9", "Accept-Encoding": "gzip, deflate, br", "Accept-Language": "en-GB,en;q=0.9,en-US;q=0.8", "Connection": "keep-alive", "Host": "127.0.0.1:3000", "Sec-Ch-Ua": "\"Chromium\";v=\"88\", \"Google Chrome\";v=\"88\", \";Not A Brand\";v=\"99\"", "Sec-Ch-Ua-Mobile": "?0", "Sec-Fetch-Dest": "document", "Sec-Fetch-Mode": "navigate", "Sec-Fetch-Site": "none", "Sec-Fetch-User": "?1", "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36", "X-Forwarded-Port": "3000", "X-Forwarded-Proto": "http"}, "httpMethod": "GET", "isBase64Encoded": false, "multiValueHeaders": {"Accept": ["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"], "Accept-Encoding": ["gzip, deflate, br"], "Accept-Language": ["en-GB,en;q=0.9,en-US;q=0.8"], "Connection": ["keep-alive"], "Host": ["127.0.0.1:3000"], "Sec-Ch-Ua": ["\"Chromium\";v=\"88\", \"Google Chrome\";v=\"88\", \";Not A Brand\";v=\"99\""], "Sec-Ch-Ua-Mobile": ["?0"], "Sec-Fetch-Dest": ["document"], "Sec-Fetch-Mode": ["navigate"], "Sec-Fetch-Site": ["none"], "Sec-Fetch-User": ["?1"], "Upgrade-Insecure-Requests": ["1"], "User-Agent": ["Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36"], "X-Forwarded-Port": ["3000"], "X-Forwarded-Proto": ["http"]}, "multiValueQueryStringParameters": null, "path": "/hello", "pathParameters": null, "queryStringParameters": null, "requestContext": {"accountId": "123456789012", "apiId": "1234567890", "domainName": "127.0.0.1:3000", "extendedRequestId": null, "httpMethod": "GET", "identity": 
{"accountId": null, "apiKey": null, "caller": null, "cognitoAuthenticationProvider": null, "cognitoAuthenticationType": null, "cognitoIdentityPoolId": null, "sourceIp": "127.0.0.1", "user": null, "userAgent": "Custom User Agent String", "userArn": null}, "path": "/hello", "protocol": "HTTP/1.1", "requestId": "d282a32c-7b65-4a39-86ed-472b8dc50915", "requestTime": "12/Feb/2021:15:02:02 +0000", "requestTimeEpoch": 1613142122, "resourceId": "123456", "resourcePath": "/hello", "stage": "Prod"}, "resource": "/hello", "stageVariables": null, "version": "1.0"}
2021-02-12 15:03:20,953 | Found one Lambda function with name 'HelloWorldFunction'
2021-02-12 15:03:20,953 | Invoking app.lambda_handler (python3.7)
2021-02-12 15:03:20,954 | No environment variables found for function 'HelloWorldFunction'
2021-02-12 15:03:20,954 | Environment variables overrides data is standard format
2021-02-12 15:03:20,955 | Loading AWS credentials from session with profile 'None'
2021-02-12 15:03:20,985 | Resolving code path. Cwd=F:\projects\sam-app\.aws-sam\build, CodeUri=HelloWorldFunction
2021-02-12 15:03:20,985 | Resolved absolute path to code is F:\projects\sam-app\.aws-sam\build\HelloWorldFunction
2021-02-12 15:03:20,986 | Code F:\projects\sam-app\.aws-sam\build\HelloWorldFunction is not a zip/jar file
2021-02-12 15:03:21,032 | Skip pulling image and use local one: amazon/aws-sam-cli-emulation-image-python3.7:rapid-1.18.0.
2021-02-12 15:03:21,033 | Mounting F:\projects\sam-app\.aws-sam\build\HelloWorldFunction as /var/task:ro,delegated inside runtime container
2021-02-12 15:03:21,480 | Starting a timer for 30 seconds for function 'HelloWorldFunction'
2021-02-12 15:03:25,359 | Cleaning all decompressed code dirs
2021-02-12 15:03:25,360 | No response from invoke container for HelloWorldFunction
2021-02-12 15:03:25,361 | Invalid lambda response received: Lambda response must be valid json
2021-02-12 15:03:25 127.0.0.1 - - [12/Feb/2021 15:03:25] "GET /hello HTTP/1.1" 502 -
2021-02-12 15:03:25 127.0.0.1 - - [12/Feb/2021 15:03:25] "GET /favicon.ico HTTP/1.1" 403 -
```
Lambda can be deployed and run on AWS, but no local debugging or execution appears possible. This makes it very hard to develop functions.
(Note that timeout was increased to ensure any startup delay was not causing problems with execution).
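For reference, the function that fails to respond above is the SAM template's stock handler, which does emit valid JSON when called directly. A minimal stand-in (an approximation of the generated `app.py`, not copied from it) confirms the function itself is not the source of the "Lambda response must be valid json" error:

```python
import json

# Minimal stand-in for the SAM "Hello World" template's app.lambda_handler,
# approximating the generated app.py for a quick sanity check outside Docker.
def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello world"}),
    }

# Calling it directly yields a JSON-serializable dict, so the invoke-container
# failure points at the container plumbing rather than the function code.
print(json.dumps(lambda_handler({}, None)))
```

One hedged observation: on Windows 7, Docker runs inside a VirtualBox VM (Docker Toolbox), which by default only shares `C:\Users` into the VM, so the `F:\projects\...` mount may arrive empty inside the container — a common cause of exactly this "No response from invoke container" symptom.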
745,993 | 26,009,224,250 | IssuesEvent | 2022-12-20 22:57:37 | DanielWestberg/economicalc | https://api.github.com/repos/DanielWestberg/economicalc | closed | Process OCR data to filter out incorrect mappings | bug frontend HIGH PRIORITY | Example of incorrect item in a receipt:
{
"amount": 5.89,
"category": null,
"description": "Moms 12%",
"flags": "",
"qty": null,
"remarks": null,
"tags": null,
"unitPrice": null
}
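A minimal sketch of the kind of post-processing this issue asks for — dropping tax-summary rows such as "Moms 12%" from the OCR'd item list. The field names match the example above, but the predicate and the pattern are illustrative assumptions, not EconomiCalc's actual model:

```python
import re

# Hypothetical filter: drop receipt rows that are VAT/total summaries rather
# than purchased items. "Moms" is Swedish for VAT; the pattern and the
# "no qty and no unitPrice" heuristic are assumptions for illustration.
SUMMARY_PATTERN = re.compile(r"^(moms|total|summa)\b", re.IGNORECASE)

def is_real_item(item: dict) -> bool:
    desc = (item.get("description") or "").strip()
    if SUMMARY_PATTERN.match(desc):
        return False
    # Summary lines in the example above carry no quantity or unit price.
    return item.get("qty") is not None or item.get("unitPrice") is not None

items = [
    {"description": "Milk 1L", "qty": 1, "unitPrice": 14.9, "amount": 14.9},
    {"description": "Moms 12%", "qty": None, "unitPrice": None, "amount": 5.89},
]
filtered = [i for i in items if is_real_item(i)]
print([i["description"] for i in filtered])  # → ['Milk 1L']
```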
5,373 | 27,004,502,162 | IssuesEvent | 2023-02-10 10:31:44 | microcai/gentoo-zh | https://api.github.com/repos/microcai/gentoo-zh | closed | drop package: net-proxy/clash-for-windows-bin | maintainer-needed | I no longer use it myself, and given the string of vulnerabilities recently, I think it should be dropped unless someone else plans to maintain it.
4,120 | 19,539,427,432 | IssuesEvent | 2021-12-31 16:26:47 | asclepias/asclepias-broker | https://api.github.com/repos/asclepias/asclepias-broker | closed | Dev: Create weekly report/slackbot of ingestion/harvesting success/failure | Monitoring and Maintainence | Create monitoring view for weekly ingestion and harvesting.
43,123 | 17,410,084,074 | IssuesEvent | 2021-08-03 11:10:00 | terraform-providers/terraform-provider-azurerm | https://api.github.com/repos/terraform-providers/terraform-provider-azurerm | closed | Terraform Crashed when creating AzureRM web app | bug crash service/app-service | Code and trace below. This code has previously run, so I do not think it is a persistent problem. At a guess, a problem with Azure state.
It has happened 3 times now on CI (azure devops) but not running locally.
Terraform v0.13.2
+ provider registry.terraform.io/hashicorp/azurerm v2.63.0
Configure the Microsoft Azure Provider
provider "azurerm" {
features {}
}
#Create Azure Resource Group
resource "azurerm_resource_group" "AutoConference-RG" {
for_each = local.conferences
name = each.key
location = "West Europe"
tags = {
tier = each.value.tier
size = each.value.size
}
}
#Create Azure App Service Plan
resource "azurerm_app_service_plan" "AutoConference-ASP" {
for_each = azurerm_resource_group.AutoConference-RG
name = each.value.name
location = "West Europe"
resource_group_name = each.value.name
kind = "Windows"
sku {
tier = each.value.tags.tier
size = each.value.tags.size
}
}
#Create Azure App Service
resource "azurerm_app_service" "AutoConference-AS" {
for_each = azurerm_app_service_plan.AutoConference-ASP
depends_on = [azurerm_app_service_plan.AutoConference-ASP]
name = each.value.name
location = each.value.location
resource_group_name = each.value.name
app_service_plan_id = each.value.id
tags = {
"Acceptance" = "Test"
}
site_config {
dotnet_framework_version = "v5.0"
always_on = false
use_32_bit_worker_process = false
default_documents = []
}
app_settings = {
manual_integration = true
}
connection_string {
name = "AsecConn"
type = "SQLServer"
value = "Server=${each.value.name}.database.windows.net,1433; Database=Asec;User ID=xxxxxxxx;Password=xxxxxxxxxx;Trusted_Connection=False;Encrypt=True;"
}
connection_string {
name = "GrvRmsConn"
type = "SQLServer"
value = "Server=${each.value.name}.database.windows.net,1433; Database=GrvRms;User ID=xxxxxxx;Password=xxxxxxxxxxx;Trusted_Connection=False;Encrypt=True;"
}
}
#Create Azure SQL Server
resource "azurerm_sql_server" "test" {
for_each = azurerm_resource_group.AutoConference-RG
name = each.value.name
resource_group_name = each.value.name
location = each.value.location
version = "12.0"
administrator_login = "xxxxxxxx"
administrator_login_password = "xxxxxxx"
}
#Create Azure SQL Database
resource "azurerm_sql_database" "Asec" {
for_each = azurerm_sql_server.test
name = "Asec"
resource_group_name = each.value.name
location = each.value.location
server_name = each.value.name
requested_service_objective_name = "S0"
tags = {
environment = "production"
}
}
#Create Azure SQL Database
resource "azurerm_sql_database" "GrvRms" {
for_each = azurerm_sql_server.test
name = "GrvRms"
resource_group_name = each.value.name
location = each.value.location
server_name = each.value.name
requested_service_objective_name = "S0"
tags = {
environment = "production"
}
}
resource "azurerm_app_service_custom_hostname_binding" "appsvc" {
for_each = azurerm_app_service.AutoConference-AS
hostname = "${each.value.name}.jort.co.uk"
app_service_name = each.value.name
resource_group_name = each.value.name
#ssl_state = "SniEnabled"
#thumbprint = azurerm_app_service_certificate.foo.thumbprint
}
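As a triage aid, the panic site (`app_service_resource.go:242` in the trace below) can be pulled out of a saved crash.log programmatically. This is only a sketch — the helper name and the assumption that the first `file.go:line` frame after the panic line is the crash site are mine, not part of Terraform or the provider:

```python
import re
from typing import Optional

def panic_site(log_text: str) -> Optional[str]:
    """Return the first `file.go:line` frame following a Go panic line, or None."""
    lines = log_text.splitlines()
    for i, line in enumerate(lines):
        if "panic:" in line:
            # Scan forward for the first frame naming a Go source file and line.
            for later in lines[i + 1:]:
                match = re.search(r"(\S+\.go:\d+)", later)
                if match:
                    return match.group(1)
            return None
    return None
```

Running this over the pipeline output below would surface the `app_service_resource.go:242` frame without wading through the interleaved DEBUG/TRACE noise.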
2021-07-15T19:44:57.1769044Z azurerm_resource_group.AutoConference-RG["jortytestyb"]: Creating...
2021-07-15T19:44:57.1771076Z azurerm_resource_group.AutoConference-RG["jortytestya"]: Creating...
2021-07-15T19:44:57.5392440Z azurerm_resource_group.AutoConference-RG["jortytestyb"]: Creation complete after 1s [id=/subscriptions/be4455db-bf41-4de0-9b0c-5703f7bbe8f0/resourceGroups/jortytestyb]
2021-07-15T19:44:57.5438170Z azurerm_resource_group.AutoConference-RG["jortytestya"]: Creation complete after 1s [id=/subscriptions/be4455db-bf41-4de0-9b0c-5703f7bbe8f0/resourceGroups/jortytestya]
2021-07-15T19:44:57.5862888Z azurerm_sql_database.Asec["jortytestyb"]: Creating...
2021-07-15T19:44:57.5864061Z azurerm_sql_database.GrvRms["jortytestya"]: Creating...
2021-07-15T19:44:57.5910498Z azurerm_sql_database.GrvRms["jortytestyb"]: Creating...
2021-07-15T19:44:57.5911112Z azurerm_app_service.AutoConference-AS["jortytestyb"]: Creating...
2021-07-15T19:44:57.5955839Z azurerm_sql_database.Asec["jortytestya"]: Creating...
2021-07-15T19:44:57.5956739Z azurerm_app_service.AutoConference-AS["jortytestya"]: Creating...
2021-07-15T19:44:57.8201861Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8207100Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8208579Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8211556Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8213633Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8231521Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8382267Z panic: runtime error: invalid memory address or nil pointer dereference
2021-07-15T19:44:57.8382870Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: [signal 0xc0000005 code=0x0 addr=0x20 pc=0x5c6e12f]
2021-07-15T19:44:57.8383486Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe:
2021-07-15T19:44:57.8384037Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: goroutine 189 [running]:
2021-07-15T19:44:57.8384832Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/web.resourceAppServiceCreate(0xc0021f1180, 0x603da40, 0xc000ac6e00, 0x0, 0x0)
2021-07-15T19:44:57.8386248Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/web/app_service_resource.go:242 +0x60f
2021-07-15T19:44:57.8387732Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).Apply(0xc000a7c5a0, 0xc00230f130, 0xc002330d00, 0x603da40, 0xc000ac6e00, 0x606ae01, 0xc002594e18, 0xc0025a8690)
2021-07-15T19:44:57.8389386Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/helper/schema/resource.go:320 +0x395
2021-07-15T19:44:57.8390575Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Provider).Apply(0xc0001c2c80, 0xc001a27a38, 0xc00230f130, 0xc002330d00, 0xc002595380, 0xc002586350, 0x606d3e0)
2021-07-15T19:44:57.8391593Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/helper/schema/provider.go:294 +0xa5
2021-07-15T19:44:57.8393181Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).ApplyResourceChange(0xc000006430, 0x6f38750, 0xc002332990, 0xc0021f0cb0, 0xc000006430, 0xc002332990, 0xc001a1bba0)
2021-07-15T19:44:57.8394389Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin/grpc_provider.go:895 +0x8c5
2021-07-15T19:44:57.8395661Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x6524300, 0xc000006430, 0x6f38750, 0xc002332990, 0xc000ac0c00, 0x0, 0x6f38750, 0xc002332990, 0xc002348000, 0xd30)
2021-07-15T19:44:57.8397071Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5/tfplugin5.pb.go:3305 +0x222
2021-07-15T19:44:57.8398401Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: google.golang.org/grpc.(*Server).processUnaryRPC(0xc0005368c0, 0x6f80338, 0xc000159c80, 0xc000097a00, 0xc000acc960, 0xa392960, 0x0, 0x0, 0x0)
2021-07-15T19:44:57.8399330Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1194 +0x52b
2021-07-15T19:44:57.8400210Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: google.golang.org/grpc.(*Server).handleStream(0xc0005368c0, 0x6f80338, 0xc000159c80, 0xc000097a00, 0x0)
2021-07-15T19:44:57.8401088Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1517 +0xd0c
2021-07-15T19:44:57.8401969Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc0010b12c0, 0xc0005368c0, 0x6f80338, 0xc000159c80, 0xc000097a00)
2021-07-15T19:44:57.8402864Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:859 +0xb2
2021-07-15T19:44:57.8403676Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: created by google.golang.org/grpc.(*Server).serveStreams.func1
2021-07-15T19:44:57.8404663Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:857 +0x1fd
2021-07-15T19:44:57.8405661Z 2021/07/15 19:44:57 [DEBUG] azurerm_sql_database.GrvRms["jortytestya"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8406362Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8406934Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.GrvRms["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8407521Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8408115Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.GrvRms["jortytestya"]
2021-07-15T19:44:57.8408601Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8409147Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_sql_database.GrvRms["jortytestya"] has no state, so skipping provisioners
2021-07-15T19:44:57.8409752Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8410300Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.GrvRms["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8411977Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8412575Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.GrvRms["jortytestya"]
2021-07-15T19:44:57.8413094Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8413532Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8413989Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8414424Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8415013Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8416075Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8416693Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_sql_database.GrvRms["jortytestya"]
2021-07-15T19:44:57.8417500Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_sql_database.GrvRms["jortytestya"]": visit complete
2021-07-15T19:44:57.8418241Z 2021/07/15 19:44:57 [DEBUG] azurerm_sql_database.GrvRms["jortytestyb"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8418930Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8419483Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.GrvRms["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8420011Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8420498Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.GrvRms["jortytestyb"]
2021-07-15T19:44:57.8421199Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8421727Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_sql_database.GrvRms["jortytestyb"] has no state, so skipping provisioners
2021-07-15T19:44:57.8422274Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8422827Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.GrvRms["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8423394Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8423892Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.GrvRms["jortytestyb"]
2021-07-15T19:44:57.8424357Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8424752Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8425153Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8425541Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8426067Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8426679Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8427404Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_sql_database.GrvRms["jortytestyb"]
2021-07-15T19:44:57.8428114Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_sql_database.GrvRms["jortytestyb"]": visit complete
2021-07-15T19:44:57.8428899Z 2021/07/15 19:44:57 [DEBUG] azurerm_app_service.AutoConference-AS["jortytestyb"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8429691Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8430281Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_app_service.AutoConference-AS["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8430839Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8431355Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_app_service.AutoConference-AS["jortytestyb"]
2021-07-15T19:44:57.8431882Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8432428Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_app_service.AutoConference-AS["jortytestyb"] has no state, so skipping provisioners
2021-07-15T19:44:57.8433143Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8433716Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_app_service.AutoConference-AS["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8434254Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8434753Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_app_service.AutoConference-AS["jortytestyb"]
2021-07-15T19:44:57.8435451Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8435848Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8436254Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8436643Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8437371Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8438015Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8438590Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_app_service.AutoConference-AS["jortytestyb"]
2021-07-15T19:44:57.8439176Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_app_service.AutoConference-AS["jortytestyb"]": visit complete
2021-07-15T19:44:57.8439802Z 2021-07-15T19:44:57.808Z [WARN] plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-07-15T19:44:57.8440634Z 2021/07/15 19:44:57 [DEBUG] azurerm_sql_database.Asec["jortytestyb"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8441318Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8441848Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.Asec["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8442396Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8442884Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.Asec["jortytestyb"]
2021-07-15T19:44:57.8443494Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8444033Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_sql_database.Asec["jortytestyb"] has no state, so skipping provisioners
2021-07-15T19:44:57.8444549Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8445080Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.Asec["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8446252Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8446751Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.Asec["jortytestyb"]
2021-07-15T19:44:57.8447494Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8447909Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8448275Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8448676Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8449188Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8449780Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8450359Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_sql_database.Asec["jortytestyb"]
2021-07-15T19:44:57.8451078Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_sql_database.Asec["jortytestyb"]": visit complete
2021-07-15T19:44:57.8451788Z 2021/07/15 19:44:57 [DEBUG] azurerm_sql_database.Asec["jortytestya"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8452450Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8452984Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.Asec["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8453486Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8454402Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.Asec["jortytestya"]
2021-07-15T19:44:57.8454887Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8455438Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_sql_database.Asec["jortytestya"] has no state, so skipping provisioners
2021-07-15T19:44:57.8456166Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8456940Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.Asec["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8458031Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8458515Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.Asec["jortytestya"]
2021-07-15T19:44:57.8459120Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8459660Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8460020Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8460419Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8461086Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8461658Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8462218Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_sql_database.Asec["jortytestya"]
2021-07-15T19:44:57.8462742Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_sql_database.Asec["jortytestya"]": visit complete
2021-07-15T19:44:57.8463666Z 2021/07/15 19:44:57 [DEBUG] azurerm_app_service.AutoConference-AS["jortytestya"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8464699Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8465268Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_app_service.AutoConference-AS["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8465888Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8466521Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_app_service.AutoConference-AS["jortytestya"]
2021-07-15T19:44:57.8467234Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8467767Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_app_service.AutoConference-AS["jortytestya"] has no state, so skipping provisioners
2021-07-15T19:44:57.8468521Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8469374Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_app_service.AutoConference-AS["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8470297Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8471095Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_app_service.AutoConference-AS["jortytestya"]
2021-07-15T19:44:57.8472065Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8472653Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8473673Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8474631Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8475473Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8476377Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8477332Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_app_service.AutoConference-AS["jortytestya"]
2021-07-15T19:44:57.8478711Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_app_service.AutoConference-AS["jortytestya"]": visit complete
2021-07-15T19:44:57.8479614Z 2021/07/15 19:44:57 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2021-07-15T19:44:57.8480496Z 2021/07/15 19:44:57 [TRACE] dag/walk: upstream of "provider["registry.terraform.io/hashicorp/azurerm"] (close)" errored, so skipping
2021-07-15T19:44:57.8481413Z 2021/07/15 19:44:57 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021-07-15T19:44:57.8482257Z 2021/07/15 19:44:57 [TRACE] statemgr.Filesystem: have already backed up original terraform.tfstate to terraform.tfstate.backup on a previous write
2021-07-15T19:44:57.8483242Z 2021/07/15 19:44:57 [TRACE] statemgr.Filesystem: state has changed since last snapshot, so incrementing serial to 131
2021-07-15T19:44:57.8484146Z 2021/07/15 19:44:57 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2021-07-15T19:44:57.8485003Z 2021/07/15 19:44:57 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2021-07-15T19:44:57.8485652Z 2021/07/15 19:44:57 [TRACE] statemgr.Filesystem: unlocked by closing terraform.tfstate
2021-07-15T19:44:57.8486385Z 2021-07-15T19:44:57.823Z [DEBUG] plugin: plugin process exited: path=.terraform/plugins/registry.terraform.io/hashicorp/azurerm/2.63.0/windows_amd64/terraform-provider-azurerm_v2.63.0_x5.exe pid=1412 error="exit status 2"
2021-07-15T19:44:57.8487095Z 2021-07-15T19:44:57.823Z [DEBUG] plugin: plugin exited
2021-07-15T19:44:57.8487303Z
2021-07-15T19:44:57.8487421Z
2021-07-15T19:44:57.8488056Z
2021-07-15T19:44:57.8488492Z !!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
2021-07-15T19:44:57.8488683Z
2021-07-15T19:44:57.8489030Z Terraform crashed! This is always indicative of a bug within Terraform.
2021-07-15T19:44:57.8489583Z A crash log has been placed at "crash.log" relative to your current
2021-07-15T19:44:57.8506145Z working directory. It would be immensely helpful if you could please
2021-07-15T19:44:57.8506972Z report the crash with Terraform[1] so that we can fix this.
2021-07-15T19:44:57.8508270Z
2021-07-15T19:44:57.8508880Z When reporting bugs, please include your terraform version. That
2021-07-15T19:44:57.8509691Z information is available on the first line of crash.log. You can also
2021-07-15T19:44:57.8510188Z get it by running 'terraform --version' on the command line.
2021-07-15T19:44:57.8511079Z
2021-07-15T19:44:57.8511515Z SECURITY WARNING: the "crash.log" file that was created may contain
2021-07-15T19:44:57.8513159Z sensitive information that must be redacted before it is safe to share
2021-07-15T19:44:57.8531610Z on the issue tracker.
2021-07-15T19:44:57.8531952Z
2021-07-15T19:44:57.8532720Z [1]: https://github.com/hashicorp/terraform/issues
2021-07-15T19:44:57.8532995Z
2021-07-15T19:44:57.8533739Z !!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
It has happened 3 times now on CI (azure devops) but not running locally.
Terraform v0.13.2
+ provider registry.terraform.io/hashicorp/azurerm v2.63.0
Configure the Microsoft Azure Provider
provider "azurerm" {
features {}
}
#Create Azure Resource Group
resource "azurerm_resource_group" "AutoConference-RG" {
for_each = local.conferences
name = each.key
location = "West Europe"
tags = {
tier = each.value.tier
size = each.value.size
}
}
#Create Azure App Service Plan
resource "azurerm_app_service_plan" "AutoConference-ASP" {
for_each = azurerm_resource_group.AutoConference-RG
name = each.value.name
location = "West Europe"
resource_group_name = each.value.name
kind = "Windows"
sku {
tier = each.value.tags.tier
size = each.value.tags.size
}
}
#Create Azure App Service
resource "azurerm_app_service" "AutoConference-AS" {
for_each = azurerm_app_service_plan.AutoConference-ASP
depends_on = [azurerm_app_service_plan.AutoConference-ASP]
name = each.value.name
location = each.value.location
resource_group_name = each.value.name
app_service_plan_id = each.value.id
tags = {
"Acceptance" = "Test"
}
site_config {
dotnet_framework_version = "v5.0"
always_on = false
use_32_bit_worker_process = false
default_documents = []
}
app_settings = {
manual_integration = true
}
connection_string {
name = "AsecConn"
type = "SQLServer"
value = "Server=${each.value.name}.database.windows.net,1433; Database=Asec;User ID=xxxxxxxx;Password=xxxxxxxxxx;Trusted_Connection=False;Encrypt=True;"
}
connection_string {
name = "GrvRmsConn"
type = "SQLServer"
value = "Server=${each.value.name}.database.windows.net,1433; Database=GrvRms;User ID=xxxxxxx;Password=xxxxxxxxxxx;Trusted_Connection=False;Encrypt=True;"
}
}
#Create Azure SQL Server
resource "azurerm_sql_server" "test" {
for_each = azurerm_resource_group.AutoConference-RG
name = each.value.name
resource_group_name = each.value.name
location = each.value.location
version = "12.0"
administrator_login = "xxxxxxxx"
administrator_login_password = "xxxxxxx"
}
#Create Azure SQL Database
resource "azurerm_sql_database" "Asec" {
for_each = azurerm_sql_server.test
name = "Asec"
resource_group_name = each.value.name
location = each.value.location
server_name = each.value.name
requested_service_objective_name = "S0"
tags = {
environment = "production"
}
}
#Create Azure SQL Database
resource "azurerm_sql_database" "GrvRms" {
for_each = azurerm_sql_server.test
name = "GrvRms"
resource_group_name = each.value.name
location = each.value.location
server_name = each.value.name
requested_service_objective_name = "S0"
tags = {
environment = "production"
}
}
resource "azurerm_app_service_custom_hostname_binding" "appsvc" {
for_each = azurerm_app_service.AutoConference-AS
hostname = "${each.value.name}.jort.co.uk"
app_service_name = each.value.name
resource_group_name = each.value.name
#ssl_state = "SniEnabled"
#thumbprint = azurerm_app_service_certificate.foo.thumbprint
}
2021-07-15T19:44:57.1769044Z �[0m�[1mazurerm_resource_group.AutoConference-RG["jortytestyb"]: Creating...�[0m�[0m
2021-07-15T19:44:57.1771076Z �[0m�[1mazurerm_resource_group.AutoConference-RG["jortytestya"]: Creating...�[0m�[0m
2021-07-15T19:44:57.5392440Z �[0m�[1mazurerm_resource_group.AutoConference-RG["jortytestyb"]: Creation complete after 1s [id=/subscriptions/be4455db-bf41-4de0-9b0c-5703f7bbe8f0/resourceGroups/jortytestyb]�[0m�[0m
2021-07-15T19:44:57.5438170Z �[0m�[1mazurerm_resource_group.AutoConference-RG["jortytestya"]: Creation complete after 1s [id=/subscriptions/be4455db-bf41-4de0-9b0c-5703f7bbe8f0/resourceGroups/jortytestya]�[0m�[0m
2021-07-15T19:44:57.5862888Z �[0m�[1mazurerm_sql_database.Asec["jortytestyb"]: Creating...�[0m�[0m
2021-07-15T19:44:57.5864061Z �[0m�[1mazurerm_sql_database.GrvRms["jortytestya"]: Creating...�[0m�[0m
2021-07-15T19:44:57.5910498Z �[0m�[1mazurerm_sql_database.GrvRms["jortytestyb"]: Creating...�[0m�[0m
2021-07-15T19:44:57.5911112Z �[0m�[1mazurerm_app_service.AutoConference-AS["jortytestyb"]: Creating...�[0m�[0m
2021-07-15T19:44:57.5955839Z �[0m�[1mazurerm_sql_database.Asec["jortytestya"]: Creating...�[0m�[0m
2021-07-15T19:44:57.5956739Z �[0m�[1mazurerm_app_service.AutoConference-AS["jortytestya"]: Creating...�[0m�[0m
2021-07-15T19:44:57.8201861Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8207100Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8208579Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8211556Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8213633Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8231521Z Error: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8382267Z panic: runtime error: invalid memory address or nil pointer dereference
2021-07-15T19:44:57.8382870Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: [signal 0xc0000005 code=0x0 addr=0x20 pc=0x5c6e12f]
2021-07-15T19:44:57.8383486Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe:
2021-07-15T19:44:57.8384037Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: goroutine 189 [running]:
2021-07-15T19:44:57.8384832Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/web.resourceAppServiceCreate(0xc0021f1180, 0x603da40, 0xc000ac6e00, 0x0, 0x0)
2021-07-15T19:44:57.8386248Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/azurerm/internal/services/web/app_service_resource.go:242 +0x60f
2021-07-15T19:44:57.8387732Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).Apply(0xc000a7c5a0, 0xc00230f130, 0xc002330d00, 0x603da40, 0xc000ac6e00, 0x606ae01, 0xc002594e18, 0xc0025a8690)
2021-07-15T19:44:57.8389386Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/helper/schema/resource.go:320 +0x395
2021-07-15T19:44:57.8390575Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Provider).Apply(0xc0001c2c80, 0xc001a27a38, 0xc00230f130, 0xc002330d00, 0xc002595380, 0xc002586350, 0x606d3e0)
2021-07-15T19:44:57.8391593Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/helper/schema/provider.go:294 +0xa5
2021-07-15T19:44:57.8393181Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).ApplyResourceChange(0xc000006430, 0x6f38750, 0xc002332990, 0xc0021f0cb0, 0xc000006430, 0xc002332990, 0xc001a1bba0)
2021-07-15T19:44:57.8394389Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin/grpc_provider.go:895 +0x8c5
2021-07-15T19:44:57.8395661Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x6524300, 0xc000006430, 0x6f38750, 0xc002332990, 0xc000ac0c00, 0x0, 0x6f38750, 0xc002332990, 0xc002348000, 0xd30)
2021-07-15T19:44:57.8397071Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5/tfplugin5.pb.go:3305 +0x222
2021-07-15T19:44:57.8398401Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: google.golang.org/grpc.(*Server).processUnaryRPC(0xc0005368c0, 0x6f80338, 0xc000159c80, 0xc000097a00, 0xc000acc960, 0xa392960, 0x0, 0x0, 0x0)
2021-07-15T19:44:57.8399330Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1194 +0x52b
2021-07-15T19:44:57.8400210Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: google.golang.org/grpc.(*Server).handleStream(0xc0005368c0, 0x6f80338, 0xc000159c80, 0xc000097a00, 0x0)
2021-07-15T19:44:57.8401088Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:1517 +0xd0c
2021-07-15T19:44:57.8401969Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc0010b12c0, 0xc0005368c0, 0x6f80338, 0xc000159c80, 0xc000097a00)
2021-07-15T19:44:57.8402864Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:859 +0xb2
2021-07-15T19:44:57.8403676Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: created by google.golang.org/grpc.(*Server).serveStreams.func1
2021-07-15T19:44:57.8404663Z 2021-07-15T19:44:57.806Z [DEBUG] plugin.terraform-provider-azurerm_v2.63.0_x5.exe: /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-azurerm/vendor/google.golang.org/grpc/server.go:857 +0x1fd
2021-07-15T19:44:57.8405661Z 2021/07/15 19:44:57 [DEBUG] azurerm_sql_database.GrvRms["jortytestya"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8406362Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8406934Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.GrvRms["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8407521Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8408115Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.GrvRms["jortytestya"]
2021-07-15T19:44:57.8408601Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8409147Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_sql_database.GrvRms["jortytestya"] has no state, so skipping provisioners
2021-07-15T19:44:57.8409752Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8410300Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.GrvRms["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8411977Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8412575Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.GrvRms["jortytestya"]
2021-07-15T19:44:57.8413094Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8413532Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8413989Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8414424Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8415013Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8416075Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8416693Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_sql_database.GrvRms["jortytestya"]
2021-07-15T19:44:57.8417500Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_sql_database.GrvRms["jortytestya"]": visit complete
2021-07-15T19:44:57.8418241Z 2021/07/15 19:44:57 [DEBUG] azurerm_sql_database.GrvRms["jortytestyb"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8418930Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8419483Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.GrvRms["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8420011Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8420498Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.GrvRms["jortytestyb"]
2021-07-15T19:44:57.8421199Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8421727Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_sql_database.GrvRms["jortytestyb"] has no state, so skipping provisioners
2021-07-15T19:44:57.8422274Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8422827Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.GrvRms["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8423394Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8423892Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.GrvRms["jortytestyb"]
2021-07-15T19:44:57.8424357Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8424752Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8425153Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8425541Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8426067Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8426679Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8427404Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_sql_database.GrvRms["jortytestyb"]
2021-07-15T19:44:57.8428114Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_sql_database.GrvRms["jortytestyb"]": visit complete
2021-07-15T19:44:57.8428899Z 2021/07/15 19:44:57 [DEBUG] azurerm_app_service.AutoConference-AS["jortytestyb"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8429691Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8430281Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_app_service.AutoConference-AS["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8430839Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8431355Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_app_service.AutoConference-AS["jortytestyb"]
2021-07-15T19:44:57.8431882Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8432428Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_app_service.AutoConference-AS["jortytestyb"] has no state, so skipping provisioners
2021-07-15T19:44:57.8433143Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8433716Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_app_service.AutoConference-AS["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8434254Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8434753Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_app_service.AutoConference-AS["jortytestyb"]
2021-07-15T19:44:57.8435451Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8435848Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8436254Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8436643Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8437371Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8438015Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8438590Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_app_service.AutoConference-AS["jortytestyb"]
2021-07-15T19:44:57.8439176Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_app_service.AutoConference-AS["jortytestyb"]": visit complete
2021-07-15T19:44:57.8439802Z 2021-07-15T19:44:57.808Z [WARN] plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2021-07-15T19:44:57.8440634Z 2021/07/15 19:44:57 [DEBUG] azurerm_sql_database.Asec["jortytestyb"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8441318Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8441848Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.Asec["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8442396Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8442884Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.Asec["jortytestyb"]
2021-07-15T19:44:57.8443494Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8444033Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_sql_database.Asec["jortytestyb"] has no state, so skipping provisioners
2021-07-15T19:44:57.8444549Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8445080Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.Asec["jortytestyb"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8446252Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8446751Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.Asec["jortytestyb"]
2021-07-15T19:44:57.8447494Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8447909Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8448275Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8448676Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8449188Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8449780Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8450359Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_sql_database.Asec["jortytestyb"]
2021-07-15T19:44:57.8451078Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_sql_database.Asec["jortytestyb"]": visit complete
2021-07-15T19:44:57.8451788Z 2021/07/15 19:44:57 [DEBUG] azurerm_sql_database.Asec["jortytestya"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8452450Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8452984Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.Asec["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8453486Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8454402Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.Asec["jortytestya"]
2021-07-15T19:44:57.8454887Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8455438Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_sql_database.Asec["jortytestya"] has no state, so skipping provisioners
2021-07-15T19:44:57.8456166Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8456940Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_sql_database.Asec["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8458031Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8458515Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_sql_database.Asec["jortytestya"]
2021-07-15T19:44:57.8459120Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8459660Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8460020Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8460419Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8461086Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8461658Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8462218Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_sql_database.Asec["jortytestya"]
2021-07-15T19:44:57.8462742Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_sql_database.Asec["jortytestya"]": visit complete
2021-07-15T19:44:57.8463666Z 2021/07/15 19:44:57 [DEBUG] azurerm_app_service.AutoConference-AS["jortytestya"]: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8464699Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8465268Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_app_service.AutoConference-AS["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8465888Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8466521Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_app_service.AutoConference-AS["jortytestya"]
2021-07-15T19:44:57.8467234Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyProvisioners
2021-07-15T19:44:57.8467767Z 2021/07/15 19:44:57 [TRACE] EvalApplyProvisioners: azurerm_app_service.AutoConference-AS["jortytestya"] has no state, so skipping provisioners
2021-07-15T19:44:57.8468521Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalMaybeTainted
2021-07-15T19:44:57.8469374Z 2021/07/15 19:44:57 [TRACE] EvalMaybeTainted: azurerm_app_service.AutoConference-AS["jortytestya"] encountered an error during creation, so it is now marked as tainted
2021-07-15T19:44:57.8470297Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteState
2021-07-15T19:44:57.8471095Z 2021/07/15 19:44:57 [TRACE] EvalWriteState: removing state object for azurerm_app_service.AutoConference-AS["jortytestya"]
2021-07-15T19:44:57.8472065Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8472653Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalIf
2021-07-15T19:44:57.8473673Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalWriteDiff
2021-07-15T19:44:57.8474631Z 2021/07/15 19:44:57 [TRACE] eval: *terraform.EvalApplyPost
2021-07-15T19:44:57.8475473Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8476377Z 2021/07/15 19:44:57 [ERROR] eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2021-07-15T19:44:57.8477332Z 2021/07/15 19:44:57 [TRACE] [walkApply] Exiting eval tree: azurerm_app_service.AutoConference-AS["jortytestya"]
2021-07-15T19:44:57.8478711Z 2021/07/15 19:44:57 [TRACE] vertex "azurerm_app_service.AutoConference-AS["jortytestya"]": visit complete
2021-07-15T19:44:57.8479614Z 2021/07/15 19:44:57 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2021-07-15T19:44:57.8480496Z 2021/07/15 19:44:57 [TRACE] dag/walk: upstream of "provider["registry.terraform.io/hashicorp/azurerm"] (close)" errored, so skipping
2021-07-15T19:44:57.8481413Z 2021/07/15 19:44:57 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021-07-15T19:44:57.8482257Z 2021/07/15 19:44:57 [TRACE] statemgr.Filesystem: have already backed up original terraform.tfstate to terraform.tfstate.backup on a previous write
2021-07-15T19:44:57.8483242Z 2021/07/15 19:44:57 [TRACE] statemgr.Filesystem: state has changed since last snapshot, so incrementing serial to 131
2021-07-15T19:44:57.8484146Z 2021/07/15 19:44:57 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2021-07-15T19:44:57.8485003Z 2021/07/15 19:44:57 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2021-07-15T19:44:57.8485652Z 2021/07/15 19:44:57 [TRACE] statemgr.Filesystem: unlocked by closing terraform.tfstate
2021-07-15T19:44:57.8486385Z 2021-07-15T19:44:57.823Z [DEBUG] plugin: plugin process exited: path=.terraform/plugins/registry.terraform.io/hashicorp/azurerm/2.63.0/windows_amd64/terraform-provider-azurerm_v2.63.0_x5.exe pid=1412 error="exit status 2"
2021-07-15T19:44:57.8487095Z 2021-07-15T19:44:57.823Z [DEBUG] plugin: plugin exited
2021-07-15T19:44:57.8487303Z
2021-07-15T19:44:57.8487421Z
2021-07-15T19:44:57.8488056Z
2021-07-15T19:44:57.8488492Z !!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
2021-07-15T19:44:57.8488683Z
2021-07-15T19:44:57.8489030Z Terraform crashed! This is always indicative of a bug within Terraform.
2021-07-15T19:44:57.8489583Z A crash log has been placed at "crash.log" relative to your current
2021-07-15T19:44:57.8506145Z working directory. It would be immensely helpful if you could please
2021-07-15T19:44:57.8506972Z report the crash with Terraform[1] so that we can fix this.
2021-07-15T19:44:57.8508270Z
2021-07-15T19:44:57.8508880Z When reporting bugs, please include your terraform version. That
2021-07-15T19:44:57.8509691Z information is available on the first line of crash.log. You can also
2021-07-15T19:44:57.8510188Z get it by running 'terraform --version' on the command line.
2021-07-15T19:44:57.8511079Z
2021-07-15T19:44:57.8511515Z SECURITY WARNING: the "crash.log" file that was created may contain
2021-07-15T19:44:57.8513159Z sensitive information that must be redacted before it is safe to share
2021-07-15T19:44:57.8531610Z on the issue tracker.
2021-07-15T19:44:57.8531952Z
2021-07-15T19:44:57.8532720Z [1]: https://github.com/hashicorp/terraform/issues
2021-07-15T19:44:57.8532995Z
2021-07-15T19:44:57.8533739Z !!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!! | non_main | terrafrom crashed when creating azurerm web app code and trace below this code has previously run so i do not think it a persistent problem at a guess a problem with azure state it has happened times now on ci azure devops but not running locally terraform provider registry terraform io hashicorp azurerm configure the microsoft azure provider provider azurerm features create azure resource group resource azurerm resource group autoconference rg for each local conferences name each key location west europe tags tier each value tier size each value size create azure app service plan resource azurerm app service plan autoconference asp for each azurerm resource group autoconference rg name each value name location west europe resource group name each value name kind windows sku tier each value tags tier size each value tags size create azure app service resource azurerm app service autoconference as for each azurerm app service plan autoconference asp depends on name each value name location each value location resource group name each value name app service plan id each value id tags acceptance test site config dotnet framework version always on false use bit worker process false default documents app settings manual integration true connection string name asecconn type sqlserver value server each value name database windows net database asec user id xxxxxxxx password xxxxxxxxxx trusted connection false encrypt true connection string name grvrmsconn type sqlserver value server each value name database windows net database grvrms user id xxxxxxx password xxxxxxxxxxx trusted connection false encrypt true create azure sql server resource azurerm sql server test for each azurerm resource group autoconference rg name each value name resource group name each value name location each value location version administrator login xxxxxxxx administrator login 
password xxxxxxx create azure sql database resource azurerm sql database asec for each azurerm sql server test name asec resource group name each value name location each value location server name each value name requested service objective name tags environment production create azure sql database resource azurerm sql database grvrms for each azurerm sql server test name grvrms resource group name each value name location each value location server name each value name requested service objective name tags environment production resource azurerm app service custom hostname binding appsvc for each azurerm app service autoconference as hostname each value name jort co uk app service name each value name resource group name each value name ssl state snienabled thumbprint azurerm app service certificate foo thumbprint � creating � � � creating � � � creation complete after � � � creation complete after � � � creating � � � creating � � � creating � � � creating � � � creating � � � creating � � � � � � � � error code unavailable desc transport is closing� � � � � � � � � � error code unavailable desc transport is closing� � � � � � � � � � error code unavailable desc transport is closing� � � � � � � � � � error code unavailable desc transport is closing� � � � � � � � � � error code unavailable desc transport is closing� � � � � � � � � � error code unavailable desc transport is closing� � � � panic runtime error invalid memory address or nil pointer dereference plugin terraform provider azurerm exe plugin terraform provider azurerm exe plugin terraform provider azurerm exe goroutine plugin terraform provider azurerm exe github com terraform providers terraform provider azurerm azurerm internal services web resourceappservicecreate plugin terraform provider azurerm exe opt teamcity agent work src github com terraform providers terraform provider azurerm azurerm internal services web app service resource go plugin terraform provider azurerm exe github com hashicorp 
terraform plugin sdk helper schema resource apply plugin terraform provider azurerm exe opt teamcity agent work src github com terraform providers terraform provider azurerm vendor github com hashicorp terraform plugin sdk helper schema resource go plugin terraform provider azurerm exe github com hashicorp terraform plugin sdk helper schema provider apply plugin terraform provider azurerm exe opt teamcity agent work src github com terraform providers terraform provider azurerm vendor github com hashicorp terraform plugin sdk helper schema provider go plugin terraform provider azurerm exe github com hashicorp terraform plugin sdk internal helper plugin grpcproviderserver applyresourcechange plugin terraform provider azurerm exe opt teamcity agent work src github com terraform providers terraform provider azurerm vendor github com hashicorp terraform plugin sdk internal helper plugin grpc provider go plugin terraform provider azurerm exe github com hashicorp terraform plugin sdk internal provider applyresourcechange handler plugin terraform provider azurerm exe opt teamcity agent work src github com terraform providers terraform provider azurerm vendor github com hashicorp terraform plugin sdk internal pb go plugin terraform provider azurerm exe google golang org grpc server processunaryrpc plugin terraform provider azurerm exe opt teamcity agent work src github com terraform providers terraform provider azurerm vendor google golang org grpc server go plugin terraform provider azurerm exe google golang org grpc server handlestream plugin terraform provider azurerm exe opt teamcity agent work src github com terraform providers terraform provider azurerm vendor google golang org grpc server go plugin terraform provider azurerm exe google golang org grpc server servestreams plugin terraform provider azurerm exe opt teamcity agent work src github com terraform providers terraform provider azurerm vendor google golang org grpc server go plugin terraform provider azurerm 
exe created by google golang org grpc server servestreams plugin terraform provider azurerm exe opt teamcity agent work src github com terraform providers terraform provider azurerm vendor google golang org grpc server go azurerm sql database grvrms apply errored but we re indicating that via the error pointer rather than returning it rpc error code unavailable desc transport is closing eval terraform evalmaybetainted evalmaybetainted azurerm sql database grvrms encountered an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm sql database grvrms eval terraform evalapplyprovisioners evalapplyprovisioners azurerm sql database grvrms has no state so skipping provisioners eval terraform evalmaybetainted evalmaybetainted azurerm sql database grvrms encountered an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm sql database grvrms eval terraform evalif eval terraform evalif eval terraform evalwritediff eval terraform evalapplypost eval terraform evalapplypost err rpc error code unavailable desc transport is closing eval terraform evalsequence err rpc error code unavailable desc transport is closing exiting eval tree azurerm sql database grvrms vertex azurerm sql database grvrms visit complete azurerm sql database grvrms apply errored but we re indicating that via the error pointer rather than returning it rpc error code unavailable desc transport is closing eval terraform evalmaybetainted evalmaybetainted azurerm sql database grvrms encountered an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm sql database grvrms eval terraform evalapplyprovisioners evalapplyprovisioners azurerm sql database grvrms has no state so skipping provisioners eval terraform evalmaybetainted evalmaybetainted azurerm sql database grvrms encountered 
an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm sql database grvrms eval terraform evalif eval terraform evalif eval terraform evalwritediff eval terraform evalapplypost eval terraform evalapplypost err rpc error code unavailable desc transport is closing eval terraform evalsequence err rpc error code unavailable desc transport is closing exiting eval tree azurerm sql database grvrms vertex azurerm sql database grvrms visit complete azurerm app service autoconference as apply errored but we re indicating that via the error pointer rather than returning it rpc error code unavailable desc transport is closing eval terraform evalmaybetainted evalmaybetainted azurerm app service autoconference as encountered an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm app service autoconference as eval terraform evalapplyprovisioners evalapplyprovisioners azurerm app service autoconference as has no state so skipping provisioners eval terraform evalmaybetainted evalmaybetainted azurerm app service autoconference as encountered an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm app service autoconference as eval terraform evalif eval terraform evalif eval terraform evalwritediff eval terraform evalapplypost eval terraform evalapplypost err rpc error code unavailable desc transport is closing eval terraform evalsequence err rpc error code unavailable desc transport is closing exiting eval tree azurerm app service autoconference as vertex azurerm app service autoconference as visit complete plugin stdio received eof stopping recv loop err rpc error code unavailable desc transport is closing azurerm sql database asec apply errored but we re indicating that via the error pointer rather than returning it rpc error code unavailable 
desc transport is closing eval terraform evalmaybetainted evalmaybetainted azurerm sql database asec encountered an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm sql database asec eval terraform evalapplyprovisioners evalapplyprovisioners azurerm sql database asec has no state so skipping provisioners eval terraform evalmaybetainted evalmaybetainted azurerm sql database asec encountered an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm sql database asec eval terraform evalif eval terraform evalif eval terraform evalwritediff eval terraform evalapplypost eval terraform evalapplypost err rpc error code unavailable desc transport is closing eval terraform evalsequence err rpc error code unavailable desc transport is closing exiting eval tree azurerm sql database asec vertex azurerm sql database asec visit complete azurerm sql database asec apply errored but we re indicating that via the error pointer rather than returning it rpc error code unavailable desc transport is closing eval terraform evalmaybetainted evalmaybetainted azurerm sql database asec encountered an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm sql database asec eval terraform evalapplyprovisioners evalapplyprovisioners azurerm sql database asec has no state so skipping provisioners eval terraform evalmaybetainted evalmaybetainted azurerm sql database asec encountered an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm sql database asec eval terraform evalif eval terraform evalif eval terraform evalwritediff eval terraform evalapplypost eval terraform evalapplypost err rpc error code unavailable desc transport is closing eval terraform evalsequence err rpc error code 
unavailable desc transport is closing exiting eval tree azurerm sql database asec vertex azurerm sql database asec visit complete azurerm app service autoconference as apply errored but we re indicating that via the error pointer rather than returning it rpc error code unavailable desc transport is closing eval terraform evalmaybetainted evalmaybetainted azurerm app service autoconference as encountered an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm app service autoconference as eval terraform evalapplyprovisioners evalapplyprovisioners azurerm app service autoconference as has no state so skipping provisioners eval terraform evalmaybetainted evalmaybetainted azurerm app service autoconference as encountered an error during creation so it is now marked as tainted eval terraform evalwritestate evalwritestate removing state object for azurerm app service autoconference as eval terraform evalif eval terraform evalif eval terraform evalwritediff eval terraform evalapplypost eval terraform evalapplypost err rpc error code unavailable desc transport is closing eval terraform evalsequence err rpc error code unavailable desc transport is closing exiting eval tree azurerm app service autoconference as vertex azurerm app service autoconference as visit complete dag walk upstream of meta count boundary eachmode fixup errored so skipping dag walk upstream of provider close errored so skipping dag walk upstream of root errored so skipping statemgr filesystem have already backed up original terraform tfstate to terraform tfstate backup on a previous write statemgr filesystem state has changed since last snapshot so incrementing serial to statemgr filesystem writing snapshot at terraform tfstate statemgr filesystem removing lock metadata file terraform tfstate lock info statemgr filesystem unlocked by closing terraform tfstate plugin plugin process exited path terraform plugins registry 
terraform io hashicorp azurerm windows terraform provider azurerm exe pid error exit status plugin plugin exited terraform crash terraform crashed this is always indicative of a bug within terraform a crash log has been placed at crash log relative to your current working directory it would be immensely helpful if you could please report the crash with terraform so that we can fix this when reporting bugs please include your terraform version that information is available on the first line of crash log you can also get it by running terraform version on the command line security warning the crash log file that was created may contain sensitive information that must be redacted before it is safe to share on the issue tracker terraform crash | 0 |
2,120 | 7,236,588,341 | IssuesEvent | 2018-02-13 07:50:04 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | Classes should not have too many dependencies (resolved via MEF) | Area: analyzer Area: maintainability feature in progress | Classes should have at most 5 dependencies.
Following shall count as a dependency:
- each parameter of an `[ImportingConstructor]`
- each property that has either an `[Import]` or an `[ImportMany]` | True | Classes should not have too many dependencies (resolved via MEF) - Classes should have at most 5 dependencies.
Following shall count as a dependency:
- each parameter of an `[ImportingConstructor]`
- each property that has either an `[Import]` or an `[ImportMany]` | main | classes should not have too many dependencies resolved via mef classes should have at most dependencies following shall count as a dependency each parameter of an each property that has either an or an | 1 |
160,221 | 25,127,149,809 | IssuesEvent | 2022-11-09 12:38:17 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | `ListView` clip shadows weird behavior | framework f: material design f: scrolling has reproducible steps found in release: 1.20 found in release: 1.22 | ## Steps to Reproduce
```
import 'package:flutter/material.dart';

void main() {
  runApp(MaterialApp(
    debugShowCheckedModeBanner: false,
    home: Home(),
  ));
}

class Home extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(),
      body: SingleChildScrollView(
        padding: EdgeInsets.only(top: 16),
        child: Column(
          children: [
            Slot(1),
            Slot(5),
            Slot(2), // if you increase the number of items so the row scrolls, the shadow is clipped at the top
          ],
        ),
      ),
    );
  }
}

class Slot extends StatelessWidget {
  final int qty;

  Slot(this.qty);

  @override
  Widget build(BuildContext context) {
    return SizedBox(
      height: 180,
      child: ListView.separated(
        padding: const EdgeInsets.symmetric(horizontal: 16),
        itemCount: qty,
        scrollDirection: Axis.horizontal,
        separatorBuilder: (_, __) => const SizedBox(width: 16),
        itemBuilder: (context, index) => Item(),
      ),
    );
  }
}

class Item extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Column(
      children: [
        Container(
          width: 160,
          height: 160,
          decoration: BoxDecoration(color: Colors.cyan, boxShadow: [BoxShadow(color: const Color(0xf2000000), blurRadius: 8)]),
        ),
        Text('text'),
      ],
    );
  }
}
```
There is a vertical scroll container (whether CustomScrollView or SingleChildScrollView). It contains two or more horizontal scroll containers. The main issue is that if a horizontal scroll container is smaller than the screen width, it doesn't clip shadows.
**important**: I know how to prevent clipping shadows, so I don't need advice like "you can add margins/paddings to the container". It's a really inconvenient way.
I just want to know what's going on, and why the shadows get clipped when the Slot widget is bigger than the screen. It's weird.
```
[✓] Flutter (Channel stable, 1.20.3, on Mac OS X 10.15.6 19G2021, locale en-GB)
• Flutter version 1.20.3 at /Users/eugene/flutter
• Framework revision 216dee60c0 (7 days ago), 2020-09-01 12:24:47 -0700
• Engine revision d1bc06f032
• Dart version 2.9.2
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at /Users/eugene/Library/Android/sdk
• Platform android-30, build-tools 29.0.2
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6222593)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.7)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.7, Build version 11E801a
• CocoaPods version 1.9.1
[✓] Android Studio (version 4.0)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 49.0.2
• Dart plugin version 193.7547
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6222593)
[✓] Connected device (1 available)
• Android SDK built for x86 (mobile) • emulator-5554 • android-x86 • Android 10 (API 29) (emulator)
• No issues found!
```

| 1.0 | `ListView` clip shadows weird behavior - ## Steps to Reproduce
```
import 'package:flutter/material.dart';

void main() {
  runApp(MaterialApp(
    debugShowCheckedModeBanner: false,
    home: Home(),
  ));
}

class Home extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(),
      body: SingleChildScrollView(
        padding: EdgeInsets.only(top: 16),
        child: Column(
          children: [
            Slot(1),
            Slot(5),
            Slot(2), // if you increase the number of items so the row scrolls, the shadow is clipped at the top
          ],
        ),
      ),
    );
  }
}

class Slot extends StatelessWidget {
  final int qty;

  Slot(this.qty);

  @override
  Widget build(BuildContext context) {
    return SizedBox(
      height: 180,
      child: ListView.separated(
        padding: const EdgeInsets.symmetric(horizontal: 16),
        itemCount: qty,
        scrollDirection: Axis.horizontal,
        separatorBuilder: (_, __) => const SizedBox(width: 16),
        itemBuilder: (context, index) => Item(),
      ),
    );
  }
}

class Item extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Column(
      children: [
        Container(
          width: 160,
          height: 160,
          decoration: BoxDecoration(color: Colors.cyan, boxShadow: [BoxShadow(color: const Color(0xf2000000), blurRadius: 8)]),
        ),
        Text('text'),
      ],
    );
  }
}
```
There is a vertical scroll container (whether CustomScrollView or SingleChildScrollView). It contains two or more horizontal scroll containers. The main issue is that if a horizontal scroll container is smaller than the screen width, it doesn't clip shadows.
**important**: I know how to prevent clipping shadows, so I don't need advice like "you can add margins/paddings to the container". It's a really inconvenient way.
I just want to know what's going on, and why the shadows get clipped when the Slot widget is bigger than the screen. It's weird.
```
[✓] Flutter (Channel stable, 1.20.3, on Mac OS X 10.15.6 19G2021, locale en-GB)
• Flutter version 1.20.3 at /Users/eugene/flutter
• Framework revision 216dee60c0 (7 days ago), 2020-09-01 12:24:47 -0700
• Engine revision d1bc06f032
• Dart version 2.9.2
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
• Android SDK at /Users/eugene/Library/Android/sdk
• Platform android-30, build-tools 29.0.2
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6222593)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 11.7)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 11.7, Build version 11E801a
• CocoaPods version 1.9.1
[✓] Android Studio (version 4.0)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 49.0.2
• Dart plugin version 193.7547
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6222593)
[✓] Connected device (1 available)
• Android SDK built for x86 (mobile) • emulator-5554 • android-x86 • Android 10 (API 29) (emulator)
• No issues found!
```

| non_main | listview clip shadows weird behavior steps to reproduce import package flutter material dart void main runapp materialapp debugshowcheckedmodebanner false home home class home extends statelesswidget override widget build buildcontext context return scaffold appbar appbar body singlechildscrollview padding edgeinsets only top child column children slot slot slot if you increase number of items so they will have scroll it clips shadow at the top class slot extends statelesswidget final int qty slot this qty override widget build buildcontext context return sizedbox height child listview separated padding const edgeinsets symmetric horizontal itemcount qty scrolldirection axis horizontal separatorbuilder const sizedbox width itembuilder context index item class item extends statelesswidget override widget build buildcontext context return column children container width height decoration boxdecoration color colors cyan boxshadow text text there is vertical scroll container whatever customscrollview singlechildscrollview it contains two or more horizontal scroll containers the main issue is if horizontal scroll container smaller than screen width it doesn t clip shadows important i know how to prevent clipping shadows so i don t need advices like you can add margins paddings to container it s really inconvenient way i just want to know what s going on and why if slot widget bigger that screen it clips shadows it s weird flutter channel stable on mac os x locale en gb • flutter version at users eugene flutter • framework revision days ago • engine revision • dart version android toolchain develop for android devices android sdk version • android sdk at users eugene library android sdk • platform android build tools • java binary at applications android studio app contents jre jdk contents home bin java • java version openjdk runtime environment build release • all android licenses accepted xcode develop for ios and macos xcode • xcode at applications xcode 
app contents developer • xcode build version • cocoapods version android studio version • android studio at applications android studio app contents • flutter plugin version • dart plugin version • java version openjdk runtime environment build release connected device available • android sdk built for mobile • emulator • android • android api emulator • no issues found | 0 |
656 | 4,172,549,850 | IssuesEvent | 2016-06-21 07:02:06 | Particular/ServiceControl | https://api.github.com/repos/Particular/ServiceControl | closed | Add option to run in maintenance mode to the Management Utility | Tag: Maintainer Prio Type: Improvement | 
This is ideal for compacting the database that in Raven v3 is part of the studio | True | Add option to run in maintenance mode to the Management Utility - 
This is ideal for compacting the database that in Raven v3 is part of the studio | main | add option to run in maintenance mode to the management utility this is ideal for compacting the database that in raven is part of the studio | 1 |
1,611 | 6,572,632,238 | IssuesEvent | 2017-09-11 03:55:22 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Slack: Broken links with anchor text | affects_2.2 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
Slack module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
N/A
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
[This PR](https://github.com/ansible/ansible-modules-extras/pull/3032#issuecomment-253554299) broke sending URLs with defined anchor text to Slack.
The format for specifying a URL with anchor text on Slack, according to their [message formatting doc](https://api.slack.com/docs/message-formatting#linking_to_urls), is the following:
```
<https://google.com|anchor text>
```
During my investigation of this issue I've found their [online message builder](https://api.slack.com/docs/messages/builder?msg=%7B%22text%22%3A%22%60%3Chttps%3A%2F%2Fgoogle.com%7Canchor%20text%3E%60%20Foo%20bar%20baz%22%7D) really useful
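The `<url|anchor text>` convention described above is easy to build programmatically. A small sketch in Python (the helper names are made up; the escaping of `&`, `<`, `>` follows Slack's documented message-format rules):

```python
def slack_escape(text):
    # Slack's message format reserves &, < and > (per its formatting docs)
    return text.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

def slack_link(url, anchor=None):
    """Build a Slack-formatted link: <url> or <url|anchor text>."""
    if anchor is None:
        return "<%s>" % url
    return "<%s|%s>" % (url, slack_escape(anchor))
```

Passing the result of `slack_link` through verbatim is exactly what the module should do — the bug reported here is that the payload gets re-escaped somewhere on the way out.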
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
``` yaml
- name: Sending slack notification {{ message }}
  local_action:
    module: slack
    channel: "{{ slack_channel }}"
    icon_url: "{{ slack_image }}"
    msg: "`<https://google.com|anchor text>` Foo bar baz"
    username: "{{ slack_name }}"
    token: "{{ slack_token }}"
```
Note: Using color good/warning/danger or normal produces the exact same output, which means this bug affects both basic formatting and attachments
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
<img width="176" alt="screen shot 2016-10-13 at 11 18 29 am" src="https://cloud.githubusercontent.com/assets/50509/19361501/bad5faf0-9137-11e6-9a55-e3d3c25be836.png">
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<img width="331" alt="screen shot 2016-10-13 at 11 17 38 am" src="https://cloud.githubusercontent.com/assets/50509/19361480/ad1dbec0-9137-11e6-9883-9f3c442c21a9.png">
| True | Slack: Broken links with anchor text - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
Slack module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
N/A
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
[This PR](https://github.com/ansible/ansible-modules-extras/pull/3032#issuecomment-253554299) broke sending URLs with defined anchor text to Slack.
The format for specifying a URL with anchor text on Slack, according to their [message formatting doc](https://api.slack.com/docs/message-formatting#linking_to_urls), is the following:
```
<https://google.com|anchor text>
```
During my investigation of this issue I've found their [online message builder](https://api.slack.com/docs/messages/builder?msg=%7B%22text%22%3A%22%60%3Chttps%3A%2F%2Fgoogle.com%7Canchor%20text%3E%60%20Foo%20bar%20baz%22%7D) really useful
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
``` yaml
- name: Sending slack notification {{ message }}
  local_action:
    module: slack
    channel: "{{ slack_channel }}"
    icon_url: "{{ slack_image }}"
    msg: "`<https://google.com|anchor text>` Foo bar baz"
    username: "{{ slack_name }}"
    token: "{{ slack_token }}"
```
Note: Using color good/warning/danger or normal produces the exact same output, which means this bug affects both basic formatting and attachments
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
<img width="176" alt="screen shot 2016-10-13 at 11 18 29 am" src="https://cloud.githubusercontent.com/assets/50509/19361501/bad5faf0-9137-11e6-9a55-e3d3c25be836.png">
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<img width="331" alt="screen shot 2016-10-13 at 11 17 38 am" src="https://cloud.githubusercontent.com/assets/50509/19361480/ad1dbec0-9137-11e6-9883-9f3c442c21a9.png">
| main | slack broken links with anchor text issue type bug report component name slack module ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary sending urls with defined anchors to slack broke the format for specifying a url with an anchor on slack according to their is the following during my investigation of this issue i ve found their really useful steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used yaml name sending slack notification message local action module slack channel slack channel icon url slack image msg foo bar baz username slack name token slack token note using color good warning danger or normal produce the exact same output that means that this bug affects to basic formatting and attachments expected results img width alt screen shot at am src actual results img width alt screen shot at am src | 1 |
4,474 | 23,339,149,220 | IssuesEvent | 2022-08-09 12:42:28 | 13hannes11/toolbx-tuner | https://api.github.com/repos/13hannes11/toolbx-tuner | opened | Automatically vendor dependencies in CI | maintainance | Vendoring dependencies and uploading them in releases makes publishing on Flathub easier. | True | Automatically vendor dependencies in CI - Vendoring dependencies and uploading them in releases makes publishing on Flathub easier. | main | automatically vendor dependencies in ci vendoring dependencies and uploading them in releases makes publishing on flathub easier | 1 |
5,534 | 27,696,600,582 | IssuesEvent | 2023-03-14 03:03:12 | microsoft/DirectXTK | https://api.github.com/repos/microsoft/DirectXTK | opened | Make use of C++/WinRT when building for C++17 | maintainence | Currently I make use of WRL to directly interface with the "ABI" namespace versions of Windows Runtime APIs for audio device enumeration, GamePad, Keyboard, and Mouse implementations for UWP. I should make the code use C++/WinRT projections instead when building for ``/std:c++17 /Zc:__cplusplus``. | True | Make use of C++/WinRT when building for C++17 - Currently I make use of WRL to directly interface with the "ABI" namespace versions of Windows Runtime APIs for audio device enumeration, GamePad, Keyboard, and Mouse implementations for UWP. I should make the code use C++/WinRT projections instead when building for ``/std:c++17 /Zc:__cplusplus``. | main | make use of c winrt when building for c currently i make use of wrl to directly interface with the abi namespace versions of windows runtime apis for audio device enumeration gamepad keyboard and mouse implementations for uwp i should make the code use c winrt projections instead when building for std c zc cplusplus | 1 |
30,857 | 13,348,183,350 | IssuesEvent | 2020-08-29 17:13:27 | Azure/azure-cli | https://api.github.com/repos/Azure/azure-cli | closed | az functionapp crashes in WSL2 | Functions Service Attention | ## Describe the bug
Using Ubuntu-20.04 in WSL2. After installing azure-cli with
```
sudo apt install azure-cli -y
```
I'm able to log in, create resource groups, create storage accounts, etc. However, the az functionapp subcommands seem to be broken. az feedback --verbose gave the following output:
**Command Name**
`az functionapp create`
**Errors:**
```
The command failed with an unexpected error. Here is the traceback:
No module named 'decorator'
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/fabric/connection.py", line 5, in <module>
from invoke.vendor.six import StringIO
ModuleNotFoundError: No module named 'invoke.vendor.six'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/knack/cli.py", line 206, in invoke
cmd_result = self.invocation.execute(args)
File "/usr/lib/python3/dist-packages/azure/cli/core/commands/__init__.py", line 528, in execute
self.commands_loader.load_arguments(command)
File "/usr/lib/python3/dist-packages/azure/cli/core/__init__.py", line 299, in load_arguments
self.command_table[command].load_arguments() # this loads the arguments via reflection
File "/usr/lib/python3/dist-packages/azure/cli/core/commands/__init__.py", line 291, in load_arguments
super(AzCliCommand, self).load_arguments()
File "/usr/lib/python3/dist-packages/knack/commands.py", line 97, in load_arguments
cmd_args = self.arguments_loader()
File "/usr/lib/python3/dist-packages/azure/cli/core/__init__.py", line 496, in default_arguments_loader
op = handler or self.get_op_handler(operation, operation_group=kwargs.get('operation_group'))
File "/usr/lib/python3/dist-packages/azure/cli/core/__init__.py", line 536, in get_op_handler
op = import_module(mod_to_import)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/usr/lib/python3/dist-packages/azure/cli/command_modules/appservice/custom.py", line 25, in <module>
from fabric import Connection
File "/usr/lib/python3/dist-packages/fabric/__init__.py", line 3, in <module>
from .connection import Config, Connection
File "/usr/lib/python3/dist-packages/fabric/connection.py", line 10, in <module>
from decorator import decorator
ModuleNotFoundError: No module named 'decorator'
```
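The traceback above boils down to missing Python packages (`decorator`, plus `invoke`'s vendored `six`) in the interpreter that runs the CLI. One way to probe which modules are importable without triggering the crash is `importlib.util.find_spec` from the standard library — a hedged sketch (the module list is just an example):

```python
import importlib.util

def missing_modules(names):
    """Return the subset of top-level module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]
```

On the affected machine, running this with `["decorator", "invoke", "fabric"]` would presumably list the packages the distro install missed; installing them into the same interpreter is the likely fix, though the report itself does not confirm that.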
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az functionapp create -n {} -g {} --storage-account {} --app-insights {} --consumption-plan-location {} --runtime {}`
## Expected Behavior
I expected a function app to be created, but the command crashed instead.
## Environment Summary
```
Linux-4.19.104-microsoft-standard-x86_64-with-glibc2.29
Python 3.8.2
Shell: bash
azure-cli 2.0.81
Extensions:
azure-devops 0.17.0
``` | 1.0 | az functionapp crashes in WSL2 - ## Describe the bug
Using Ubuntu-20.04 in WSL2. After installing azure-cli with
```
sudo apt install azure-cli -y
```
I'm able to log in, create resource groups, create storage accounts, etc. However, the az functionapp subcommands seem to be broken. az feedback --verbose gave the following output:
**Command Name**
`az functionapp create`
**Errors:**
```
The command failed with an unexpected error. Here is the traceback:
No module named 'decorator'
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/fabric/connection.py", line 5, in <module>
from invoke.vendor.six import StringIO
ModuleNotFoundError: No module named 'invoke.vendor.six'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/knack/cli.py", line 206, in invoke
cmd_result = self.invocation.execute(args)
File "/usr/lib/python3/dist-packages/azure/cli/core/commands/__init__.py", line 528, in execute
self.commands_loader.load_arguments(command)
File "/usr/lib/python3/dist-packages/azure/cli/core/__init__.py", line 299, in load_arguments
self.command_table[command].load_arguments() # this loads the arguments via reflection
File "/usr/lib/python3/dist-packages/azure/cli/core/commands/__init__.py", line 291, in load_arguments
super(AzCliCommand, self).load_arguments()
File "/usr/lib/python3/dist-packages/knack/commands.py", line 97, in load_arguments
cmd_args = self.arguments_loader()
File "/usr/lib/python3/dist-packages/azure/cli/core/__init__.py", line 496, in default_arguments_loader
op = handler or self.get_op_handler(operation, operation_group=kwargs.get('operation_group'))
File "/usr/lib/python3/dist-packages/azure/cli/core/__init__.py", line 536, in get_op_handler
op = import_module(mod_to_import)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/usr/lib/python3/dist-packages/azure/cli/command_modules/appservice/custom.py", line 25, in <module>
from fabric import Connection
File "/usr/lib/python3/dist-packages/fabric/__init__.py", line 3, in <module>
from .connection import Config, Connection
File "/usr/lib/python3/dist-packages/fabric/connection.py", line 10, in <module>
from decorator import decorator
ModuleNotFoundError: No module named 'decorator'
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az functionapp create -n {} -g {} --storage-account {} --app-insights {} --consumption-plan-location {} --runtime {}`
## Expected Behavior
I expected a function app to be created, but the command crashed instead.
## Environment Summary
```
Linux-4.19.104-microsoft-standard-x86_64-with-glibc2.29
Python 3.8.2
Shell: bash
azure-cli 2.0.81
Extensions:
azure-devops 0.17.0
``` | non_main | az functionapp crashes in describe the bug using ubuntu in after installing azure cli with sudo apt install azure cli y i m able to login create resource groups create storage accounts etc however the az functionapp subcommands seems to be broken az feedback verbose gave the following output command name az functionapp create errors the command failed with an unexpected error here is the traceback no module named decorator traceback most recent call last file usr lib dist packages fabric connection py line in from invoke vendor six import stringio modulenotfounderror no module named invoke vendor six during handling of the above exception another exception occurred traceback most recent call last file usr lib dist packages knack cli py line in invoke cmd result self invocation execute args file usr lib dist packages azure cli core commands init py line in execute self commands loader load arguments command file usr lib dist packages azure cli core init py line in load arguments self command table load arguments this loads the arguments via reflection file usr lib dist packages azure cli core commands init py line in load arguments super azclicommand self load arguments file usr lib dist packages knack commands py line in load arguments cmd args self arguments loader file usr lib dist packages azure cli core init py line in default arguments loader op handler or self get op handler operation operation group kwargs get operation group file usr lib dist packages azure cli core init py line in get op handler op import module mod to import file usr lib importlib init py line in import module return bootstrap gcd import name package level file line in gcd import file line in find and load file line in find and load unlocked file line in load unlocked file line in exec module file line in call with frames removed file usr lib dist packages azure cli command modules appservice custom py line in from fabric import connection file usr lib dist packages 
fabric init py line in from connection import config connection file usr lib dist packages fabric connection py line in from decorator import decorator modulenotfounderror no module named decorator to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information put any pre requisite steps here az functionapp create n g storage account app insights compumption plan location runtime expected behavior i expected a function app to be created but had a crash happen instead environment summary linux microsoft standard with python shell bash azure cli extensions azure devops | 0 |
2,912 | 10,369,433,780 | IssuesEvent | 2019-09-08 03:00:44 | luckyariane/arthas-bot | https://api.github.com/repos/luckyariane/arthas-bot | closed | Implement new currency DB | maintain | After 'upgrading' from AnkhBotR2 to Streamlabs chatbot the currency DB is no longer an easily accessible SQLite DB. I will need to implement my own.
Much of my custom bot functionality depended on access to this DB so until this is done custom functionality is limited. | True | Implement new currency DB - After 'upgrading' from AnkhBotR2 to Streamlabs chatbot the currency DB is no longer an easily accessible SQLite DB. I will need to implement my own.
Much of my custom bot functionality depended on access to this DB so until this is done custom functionality is limited. | main | implement new currency db after upgrading from to streamlabs chatbot the currency db is no longer an easily accessible sqlite db i will need to implement my own much of my custom bot functionality depended on access to this db so until this is done custom functionality is limited | 1 |
5,233 | 26,534,840,173 | IssuesEvent | 2023-01-19 14:58:56 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | Upgrade `wagtail-localize-git` to 0.13. | engineering maintain | ## Description
To unblock upgrade of Wagtail to version 3.0 (#9674) we need to upgrade `wagtail-localize-git` to version 0.13.0.
See also: https://github.com/wagtail/wagtail-localize-git/releases/tag/v.0.13.0
## Acceptance criteria
- [x] `wagtail-localize-git` is upgraded to version 0.13.0 | True | Upgrade `wagtail-localize-git` to 0.13. - ## Description
To unblock upgrade of Wagtail to version 3.0 (#9674) we need to upgrade `wagtail-localize-git` to version 0.13.0.
See also: https://github.com/wagtail/wagtail-localize-git/releases/tag/v.0.13.0
## Acceptance criteria
- [x] `wagtail-localize-git` is upgraded to version 0.13.0 | main | upgrade wagtail localize git to description to unblock upgrade of wagtail to version we need to upgrade wagtail localize git to version see also acceptance criteria wagtail localize git is upgraded to version | 1 |
57,563 | 15,862,769,901 | IssuesEvent | 2021-04-08 12:02:02 | galasa-dev/projectmanagement | https://api.github.com/repos/galasa-dev/projectmanagement | opened | Firefox layout defects | defect webui | In Firefox (86.0.1 (64-bit) on Mac Big Sur), the slideout hamburger menu is placed too high

The same issue does **not** occur with the `organise table`, `Filter`, `Work list`, `Compare list` or `Help` flyouts. | 1.0 | Firefox layout defects - In Firefox (86.0.1 (64-bit) on Mac Big Sur), the slideout hamburger menu is placed too high

The same issue does **not** occur with the `organise table`, `Filter`, `Work list`, `Compare list` or `Help` flyouts. | non_main | firefox layout defects in firefox bit on mac big sur the slideout hamburger menu is placed too high the same issue does not occur with the organise table filter work list compare list or help flyouts | 0 |
4,301 | 21,673,088,019 | IssuesEvent | 2022-05-08 09:15:33 | Numble-challenge-Team/client | https://api.github.com/repos/Numble-challenge-Team/client | closed | 공통 컴포넌트 작업 | feat style maintain | ### ISSUE
- Type: feature
- Page: -
### TODO
- [x] 인풋 컴포넌트
- [x] 버튼 컴포넌트
- [x] 텍스트 컴포넌트
- [x] 타이틀 컴포넌트
- [x] 비디오 태그 컴포넌트 | True | 공통 컴포넌트 작업 - ### ISSUE
- Type: feature
- Page: -
### TODO
- [x] 인풋 컴포넌트
- [x] 버튼 컴포넌트
- [x] 텍스트 컴포넌트
- [x] 타이틀 컴포넌트
- [x] 비디오 태그 컴포넌트 | main | 공통 컴포넌트 작업 issue type feature page todo 인풋 컴포넌트 버튼 컴포넌트 텍스트 컴포넌트 타이틀 컴포넌트 비디오 태그 컴포넌트 | 1 |
5,428 | 27,237,804,946 | IssuesEvent | 2023-02-21 17:38:05 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Users & Permissions frontend | work: frontend status: ready restricted: maintainers type: meta | Refer RFC for implementation details
- [PR link](https://github.com/centerofci/mathesar-wiki/pull/75)
- Wiki link to be updated after RFC is merged
### Users
- [x] Base API utility for users - @pavish
- [x] Stores needed for user profile - @pavish
- [x] User profile route & page - @pavish
- [x] Administration route & pages - @pavish
- [x] Listing route functionality - @pavish
- [x] Add/Edit route functionality - @pavish
- [x] Style admin page - @rajatvijay
- [x] Style user listing page - @rajatvijay
- [x] #2430 @seancolsen
- [x] #2428 @seancolsen
- [x] #2431 @seancolsen
- [x] #2027 - @pavish
- [x] #2448 @seancolsen
- [x] #2404 - @pavish
### Permissions
- [x] #2470 - @pavish
- [x] #2471 - @pavish
- [x] #2472 - @pavish
- [x] #2473 - @pavish
- [x] #2474 - @pavish
- Admin route
- Import upload and preview routes
- Record page route
- Exploration edit route
- [x] #2475 - @pavish
- [x] #2446 - @seancolsen
- [x] #2476 - @rajatvijay
- Editing the spreadsheet
- Adding new columns
- Operations in right-click context menus
- [x] #2477 - @rajatvijay
- [x] #2478 - @pavish
- [x] #2447 - @seancolsen
- [x] #2479 - @pavish
- [x] #2480 - @pavish
Blocked by:
- https://github.com/centerofci/mathesar/issues/2311
- https://github.com/centerofci/mathesar/issues/2323
- https://github.com/centerofci/mathesar/issues/2321
- https://github.com/centerofci/mathesar/issues/2392
Non-blocking backend work:
- https://github.com/centerofci/mathesar/issues/2312
See also:
- #1673
- #1985 | True | Users & Permissions frontend - Refer RFC for implementation details
- [PR link](https://github.com/centerofci/mathesar-wiki/pull/75)
- Wiki link to be updated after RFC is merged
### Users
- [x] Base API utility for users - @pavish
- [x] Stores needed for user profile - @pavish
- [x] User profile route & page - @pavish
- [x] Administration route & pages - @pavish
- [x] Listing route functionality - @pavish
- [x] Add/Edit route functionality - @pavish
- [x] Style admin page - @rajatvijay
- [x] Style user listing page - @rajatvijay
- [x] #2430 @seancolsen
- [x] #2428 @seancolsen
- [x] #2431 @seancolsen
- [x] #2027 - @pavish
- [x] #2448 @seancolsen
- [x] #2404 - @pavish
### Permissions
- [x] #2470 - @pavish
- [x] #2471 - @pavish
- [x] #2472 - @pavish
- [x] #2473 - @pavish
- [x] #2474 - @pavish
- Admin route
- Import upload and preview routes
- Record page route
- Exploration edit route
- [x] #2475 - @pavish
- [x] #2446 - @seancolsen
- [x] #2476 - @rajatvijay
- Editing the spreadsheet
- Adding new columns
- Operations in right-click context menus
- [x] #2477 - @rajatvijay
- [x] #2478 - @pavish
- [x] #2447 - @seancolsen
- [x] #2479 - @pavish
- [x] #2480 - @pavish
Blocked by:
- https://github.com/centerofci/mathesar/issues/2311
- https://github.com/centerofci/mathesar/issues/2323
- https://github.com/centerofci/mathesar/issues/2321
- https://github.com/centerofci/mathesar/issues/2392
Non-blocking backend work:
- https://github.com/centerofci/mathesar/issues/2312
See also:
- #1673
- #1985 | main | users permissions frontend refer rfc for implementation details wiki link to be updated after rfc is merged users base api utility for users pavish stores needed for user profile pavish user profile route page pavish administration route pages pavish listing route functionality pavish add edit route functionality pavish style admin page rajatvijay style user listing page rajatvijay seancolsen seancolsen seancolsen pavish seancolsen pavish permissions pavish pavish pavish pavish pavish admin route import upload and preview routes record page route exploration edit route pavish seancolsen rajatvijay editing the spreadsheet adding new columns operations in right click context menus rajatvijay pavish seancolsen pavish pavish blocked by non blocking backend work see also | 1 |
4,764 | 24,536,610,018 | IssuesEvent | 2022-10-11 21:25:14 | chocolatey-community/chocolatey-package-requests | https://api.github.com/repos/chocolatey-community/chocolatey-package-requests | closed | RFP - monitorian | Status: Available For Maintainer(s) | <!--
* Please ensure the package does not already exist in the Chocolatey Community Repository - https://chocolatey.org/packages - by using a relevant search.
* Please ensure there is no existing open package request.
* Please ensure the issue title starts with 'RFP - ' - for example 'RFP - adobe-reader'
* Please also ensure the issue title matches the identifier you expect the package should be named.
* Please ensure you have both the Software Project URL and the Software Download URL before continuing.
NOTE: Keep in mind we have an etiquette regarding communication that we expect folks to observe when they are looking for support in the Chocolatey community - https://github.com/chocolatey/chocolatey-package-requests/blob/master/README.md#etiquette-regarding-communication
PLEASE REMOVE ALL COMMENTS ONCE YOU HAVE READ THEM.
-->
## Checklist
- [x] The package I am requesting does not already exist on https://chocolatey.org/packages;
- [x] There is no open issue for this package;
- [x] The issue title starts with 'RFP - ';
- [x] The download URL is public and not locked behind a paywall / login;
## Package Details
Software project URL : https://github.com/emoacht/Monitorian
Direct download URL for the software / installer : https://github.com/emoacht/Monitorian/releases/download/3.4.0-Installer/MonitorianInstaller340.zip
Software summary / short description: Monitorian is a Windows desktop tool to adjust the brightness of multiple monitors with ease.
<!-- ## Package Expectations
Here you can make suggestions on what you would expect the package to do outside of 'installing' - eg. adding icons to the desktop
-->
The package should be installed and then it runs in background, an icon is visible in taskbar notification area that can be clicked to adjust the monitors (all the attached monitors) brightness simply using a handler for each attached monitor. | True | RFP - monitorian - <!--
* Please ensure the package does not already exist in the Chocolatey Community Repository - https://chocolatey.org/packages - by using a relevant search.
* Please ensure there is no existing open package request.
* Please ensure the issue title starts with 'RFP - ' - for example 'RFP - adobe-reader'
* Please also ensure the issue title matches the identifier you expect the package should be named.
* Please ensure you have both the Software Project URL and the Software Download URL before continuing.
NOTE: Keep in mind we have an etiquette regarding communication that we expect folks to observe when they are looking for support in the Chocolatey community - https://github.com/chocolatey/chocolatey-package-requests/blob/master/README.md#etiquette-regarding-communication
PLEASE REMOVE ALL COMMENTS ONCE YOU HAVE READ THEM.
-->
## Checklist
- [x] The package I am requesting does not already exist on https://chocolatey.org/packages;
- [x] There is no open issue for this package;
- [x] The issue title starts with 'RFP - ';
- [x] The download URL is public and not locked behind a paywall / login;
## Package Details
Software project URL : https://github.com/emoacht/Monitorian
Direct download URL for the software / installer : https://github.com/emoacht/Monitorian/releases/download/3.4.0-Installer/MonitorianInstaller340.zip
Software summary / short description: Monitorian is a Windows desktop tool to adjust the brightness of multiple monitors with ease.
<!-- ## Package Expectations
Here you can make suggestions on what you would expect the package to do outside of 'installing' - eg. adding icons to the desktop
-->
The package should be installed and then it runs in background, an icon is visible in taskbar notification area that can be clicked to adjust the monitors (all the attached monitors) brightness simply using a handler for each attached monitor. | main | rfp monitorian please ensure the package does not already exist in the chocolatey community repository by using a relevant search please ensure there is no existing open package request please ensure the issue title starts with rfp for example rfp adobe reader please also ensure the issue title matches the identifier you expect the package should be named please ensure you have both the software project url and the software download url before continuing note keep in mind we have an etiquette regarding communication that we expect folks to observe when they are looking for support in the chocolatey community please remove all comments once you have read them checklist the package i am requesting does not already exist on there is no open issue for this package the issue title starts with rfp the download url is public and not locked behind a paywall login package details software project url direct download url for the software installer software summary short description monitorian is a windows desktop tool to adjust the brightness of multiple monitors with ease package expectations here you can make suggestions on what you would expect the package to do outside of installing eg adding icons to the desktop the package should be installed and then it runs in background an icon is visible in taskbar notification area that can be clicked to adjust the monitors all the attached monitors brightness simply using a handler for each attached monitor | 1 |
38,399 | 4,954,827,765 | IssuesEvent | 2016-12-01 18:42:40 | map-egypt/map-egypt.github.io | https://api.github.com/repos/map-egypt/map-egypt.github.io | closed | Adjust width of buttons | design tweaks in progress | On the comps the buttons are a little wider which I think was a pretty nice design choice. I would add this into the build.
Also make sure that the notifications button in the footer is slightly wider and also that it has the same rollover pattern that the other buttons have. | 1.0 | Adjust width of buttons - On the comps the buttons are a little wider which I think was a pretty nice design choice. I would add this into the build.
Also make sure that the notifications button in the footer is slightly wider and also that it has the same rollover pattern that the other buttons have. | non_main | adjust width of buttons on the comps the buttons are a little wider which i think was a pretty nice design choice i would add this into the build also make sure that the notifications button in the footer is slightly wider and also that it has the same rollover pattern that the other buttons have | 0 |
1,826 | 6,577,335,615 | IssuesEvent | 2017-09-12 00:11:35 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | lineinfile with regexp writes line when there isn't a match | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lineinfile
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Bone stock
##### OS / ENVIRONMENT
CentOS 7
##### SUMMARY
When lineinfile with regexp finds a match, it substitutes properly. When it doesn't find a match (say, on subsequent runs), it just dumps the specified line at the bottom of the file as if you hadn't specified a regexp. Needless to say, this breaks idempotence.
##### STEPS TO REPRODUCE
1. Take a look at a file.
2. Run lineinfile with regexp that matches a line
3. See that your line was in fact replaced.
4. Run lineinfile again.
5. See that the specified replacement line is now duplicated at the bottom of the file.
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Add ops scripts to sudo secure_path
lineinfile:
dest: /etc/sudoers
regexp: >
^Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin$
line: 'Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin'
validate: visudo -cf %s
```
##### EXPECTED RESULTS
Idempotent line replacement after a second run
```
# Refuse to run if unable to disable echo on the tty.
Defaults !visiblepw
#
# Preserving HOME has security implications since many programs
# use it when searching for configuration files. Note that HOME
# is already set when the the env_reset option is enabled, so
# this option is only effective for configurations where either
# env_reset is disabled or HOME is present in the env_keep list.
#
Defaults always_set_home
Defaults env_reset
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"
Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
Defaults env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
Defaults env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
#
# Adding HOME to env_keep may enable a user to run unrestricted
# commands via sudo.
#
# Defaults env_keep += "HOME"
Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin
## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
## Syntax:
##
## user MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
## Allows people in group wheel to run all commands without a password
%wheel ALL=(ALL) NOPASSWD: ALL
```
##### ACTUAL RESULTS
Potentially show-stopping garbage at the bottom of a file
```
# Refuse to run if unable to disable echo on the tty.
Defaults !visiblepw
#
# Preserving HOME has security implications since many programs
# use it when searching for configuration files. Note that HOME
# is already set when the the env_reset option is enabled, so
# this option is only effective for configurations where either
# env_reset is disabled or HOME is present in the env_keep list.
#
Defaults always_set_home
Defaults env_reset
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"
Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
Defaults env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
Defaults env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
#
# Adding HOME to env_keep may enable a user to run unrestricted
# commands via sudo.
#
# Defaults env_keep += "HOME"
Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin
## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
## Syntax:
##
## user MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
## Allows people in group wheel to run all commands without a password
%wheel ALL=(ALL) NOPASSWD: ALL
Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin
```
| True | lineinfile with regexp writes line when there isn't a match - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lineinfile
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Bone stock
##### OS / ENVIRONMENT
CentOS 7
##### SUMMARY
When lineinfile with regexp finds a match, it substitutes properly. When it doesn't find a match (say, on subsequent runs), it just dumps the specified line at the bottom of the file as if you hadn't specified a regexp. Needless to say, this breaks idempotence.
##### STEPS TO REPRODUCE
1. Take a look at a file.
2. Run lineinfile with regexp that matches a line
3. See that your line was in fact replaced.
4. Run lineinfile again.
5. See that the specified replacement line is now duplicated at the bottom of the file.
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Add ops scripts to sudo secure_path
lineinfile:
dest: /etc/sudoers
regexp: >
^Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin$
line: 'Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin'
validate: visudo -cf %s
```
##### EXPECTED RESULTS
Idempotent line replacement after a second run
```
# Refuse to run if unable to disable echo on the tty.
Defaults !visiblepw
#
# Preserving HOME has security implications since many programs
# use it when searching for configuration files. Note that HOME
# is already set when the the env_reset option is enabled, so
# this option is only effective for configurations where either
# env_reset is disabled or HOME is present in the env_keep list.
#
Defaults always_set_home
Defaults env_reset
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"
Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
Defaults env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
Defaults env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
#
# Adding HOME to env_keep may enable a user to run unrestricted
# commands via sudo.
#
# Defaults env_keep += "HOME"
Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin
## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
## Syntax:
##
## user MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
## Allows people in group wheel to run all commands without a password
%wheel ALL=(ALL) NOPASSWD: ALL
```
##### ACTUAL RESULTS
Potentially show-stopping garbage at the bottom of a file
```
# Refuse to run if unable to disable echo on the tty.
Defaults !visiblepw
#
# Preserving HOME has security implications since many programs
# use it when searching for configuration files. Note that HOME
# is already set when the the env_reset option is enabled, so
# this option is only effective for configurations where either
# env_reset is disabled or HOME is present in the env_keep list.
#
Defaults always_set_home
Defaults env_reset
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"
Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
Defaults env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
Defaults env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
#
# Adding HOME to env_keep may enable a user to run unrestricted
# commands via sudo.
#
# Defaults env_keep += "HOME"
Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin
## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
## Syntax:
##
## user MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
## Allows people in group wheel to run all commands without a password
%wheel ALL=(ALL) NOPASSWD: ALL
Defaults secure_path = /opt/d7/bin:/sbin:/bin:/usr/sbin:/usr/bin
```
| main | lineinfile with regexp writes line when there isn t a match issue type bug report component name lineinfile ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration bone stock os environment centos summary when lineinfile with regexp finds a match it substitutes properly when it doesn t find a match say on subsequent runs it just dumps the specified line at the bottom of the file as if you hadn t specified a regexp needless to say this breaks idempotence steps to reproduce take a look at a file run lineinfile with regexp that matches a line see that your line was in fact replaced run lineinfile again see that the specified replacement line is now duplicated at the bottom of the file name add ops scripts to sudo secure path lineinfile dest etc sudoers regexp defaults secure path sbin bin usr sbin usr bin line defaults secure path opt bin sbin bin usr sbin usr bin validate visudo cf s expected results idempotent line replacement after a second run refuse to run if unable to disable echo on the tty defaults visiblepw preserving home has security implications since many programs use it when searching for configuration files note that home is already set when the the env reset option is enabled so this option is only effective for configurations where either env reset is disabled or home is present in the env keep list defaults always set home defaults env reset defaults env keep colors display hostname histsize inputrc kdedir ls colors defaults env keep mail qtdir username lang lc address lc ctype defaults env keep lc collate lc identification lc measurement lc messages defaults env keep lc monetary lc name lc numeric lc paper lc telephone defaults env keep lc time lc all language linguas xkb charset xauthority adding home to env keep may enable a user to run unrestricted commands via sudo defaults env keep home defaults secure path opt bin sbin bin usr sbin usr bin next comes the main part 
which users can run what software on which machines the sudoers file can be shared between multiple systems syntax user machine commands the commands section may have other options added to it allow root to run any commands anywhere root all all all allows people in group wheel to run all commands without a password wheel all all nopasswd all actual results potentially show stopping garbage at the bottom of a file refuse to run if unable to disable echo on the tty defaults visiblepw preserving home has security implications since many programs use it when searching for configuration files note that home is already set when the the env reset option is enabled so this option is only effective for configurations where either env reset is disabled or home is present in the env keep list defaults always set home defaults env reset defaults env keep colors display hostname histsize inputrc kdedir ls colors defaults env keep mail qtdir username lang lc address lc ctype defaults env keep lc collate lc identification lc measurement lc messages defaults env keep lc monetary lc name lc numeric lc paper lc telephone defaults env keep lc time lc all language linguas xkb charset xauthority adding home to env keep may enable a user to run unrestricted commands via sudo defaults env keep home defaults secure path opt bin sbin bin usr sbin usr bin next comes the main part which users can run what software on which machines the sudoers file can be shared between multiple systems syntax user machine commands the commands section may have other options added to it allow root to run any commands anywhere root all all all allows people in group wheel to run all commands without a password wheel all all nopasswd all defaults secure path opt bin sbin bin usr sbin usr bin | 1 |
488,976 | 14,100,004,954 | IssuesEvent | 2020-11-06 02:56:45 | hydroshare/hydroshare | https://api.github.com/repos/hydroshare/hydroshare | closed | Discover page info modal does not close when clicking outside the modal | Discover Medium Priority bug | The modal does not close when clicking in another part of the page. There is also no "X" in the corner of the modal and it is regularly clipped (at least stylistically) by the top menu and general browser window. Clicking the "i" button again does close the modal, but that could perhaps be unintuitive to some users who are used to the "clickout" or "x" closure approaches previously described.
[Video](https://drive.google.com/file/d/1jC4PwX15Icka_-b5HWdKQEVGJNP2VtQJ/view?usp=sharing)
[Test Environment](https://github.com/hydroshare/hydroshare/files/2146073/180628_001_test-env.txt)
| 1.0 | Discover page info modal does not close when clicking outside the modal - The modal does not close when clicking in another part of the page. There is also no "X" in the corner of the modal and it is regularly clipped (at least stylistically) by the top menu and general browser window. Clicking the "i" button again does close the modal, but that could perhaps be unintuitive to some users who are used to the "clickout" or "x" closure approaches previously described.
[Video](https://drive.google.com/file/d/1jC4PwX15Icka_-b5HWdKQEVGJNP2VtQJ/view?usp=sharing)
[Test Environment](https://github.com/hydroshare/hydroshare/files/2146073/180628_001_test-env.txt)
| non_main | discover page info modal does not close when clicking outside the modal the modal does not close when clicking in another part of the page there is also no x in the corner of the modal and it is regularly clipped at least stylistically by the top menu and general browser window clicking the i button again does close the modal but that could perhaps be unintuitive to some users who are used to the clickout or x closure approaches previously described | 0 |
1,624 | 6,572,650,469 | IssuesEvent | 2017-09-11 04:04:51 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | elasticsearch_plugin does not work with Elasticsearch 5.x | affects_2.2 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
elasticsearch_plugin module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file = /home/william/.ansible.cfg
configured module search path = ['/usr/share/ansible', 'playbooks/library']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
The control host is running Ubuntu 16.04.
The target hosts are running CentOS 7.2 and CentOS 6.8
##### SUMMARY
<!--- Explain the problem briefly -->
The elasticsearch_plugin fails to run on Elasticsearch 5.x.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
I tried two variations.
The first was a simple task:
```yaml
- name: Install Elasticsearch S3 repository plugin (ES 5.x)
elasticsearch_plugin:
name: repository-s3
```
The second was a task with the binary path specified:
```yaml
- name: Install Elasticsearch S3 repository plugin (ES 5.x)
elasticsearch_plugin:
name: repository-s3
plugin_bin: '/usr/share/elasticsearch/bin/elasticsearch-plugin'
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
It would install the repository-s3 plugin.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
The simple task (without specifying the binary path) produced the following output:
```
fatal: [es-c6-5]: FAILED! => {
"changed": false,
"cmd": "/usr/share/elasticsearch/bin/plugin install repository-s3 --timeout 1m",
"failed": true,
"invocation": {
"module_args": {
"name": "repository-s3",
"plugin_bin": "/usr/share/elasticsearch/bin/plugin",
"plugin_dir": "/usr/share/elasticsearch/plugins/",
"proxy_host": null,
"proxy_port": null,
"state": "present",
"timeout": "1m",
"url": null,
"version": null
},
"module_name": "elasticsearch_plugin"
},
"msg": "[Errno 2] No such file or directory",
"rc": 2
}
```
The task where I explicitly provided the plugin binary path on Elasticsearch 5.x produced the following output:
```
fatal: [es-c6-5]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"name": "repository-s3",
"plugin_bin": "/usr/share/elasticsearch/bin/elasticsearch-plugin",
"plugin_dir": "/usr/share/elasticsearch/plugins/",
"proxy_host": null,
"proxy_port": null,
"state": "present",
"timeout": "1m",
"url": null,
"version": null
},
"module_name": "elasticsearch_plugin"
},
"msg": "A tool for managing installed elasticsearch plugins\n\nCommands\n--------\nlist - Lists installed elasticsearch plugins\ninstall - Install a plugin\nremove - Removes a plugin from elasticsearch\n\nNon-option arguments:\ncommand \n\nOption Description \n------ ----------- \n-h, --help show help \n-s, --silent show minimal output\n-v, --verbose show verbose output\nERROR: timeout is not a recognized option\n"
}
``` | True | elasticsearch_plugin does not work with Elasticsearch 5.x
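The two failures above line up with two breaking changes visible in the output itself: ES 5.x renamed `bin/plugin` to `bin/elasticsearch-plugin` (hence the `[Errno 2]` on the old path), and the new CLI no longer accepts `--timeout` (its help text lists only `-h`, `-s`, `-v`). A minimal sketch of version-aware command construction — the paths and helper name are illustrative, not the module's actual code:

```python
def plugin_install_command(es_version, plugin_name, timeout="1m"):
    """Build the plugin-install argv for a given Elasticsearch version.

    ES >= 5.0 renamed bin/plugin to bin/elasticsearch-plugin and dropped
    the --timeout option, so the flag may only be passed to 2.x and older.
    """
    major = int(es_version.split(".")[0])
    if major >= 5:
        return ["/usr/share/elasticsearch/bin/elasticsearch-plugin",
                "install", plugin_name]
    return ["/usr/share/elasticsearch/bin/plugin",
            "install", plugin_name, "--timeout", timeout]
```

A fix along these lines — detect the binary for the installed version and suppress the unsupported flag — is what the report calls for, rather than hard-coding the 2.x behavior.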
56,469 | 14,078,436,629 | IssuesEvent | 2020-11-04 13:34:18 | themagicalmammal/android_kernel_samsung_a5xelte | https://api.github.com/repos/themagicalmammal/android_kernel_samsung_a5xelte | opened | CVE-2018-11508 (Medium) detected in linuxv3.10 | security vulnerability |
## CVE-2018-11508 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.10</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/themagicalmammal/android_kernel_samsung_a5xelte/commit/738375813823cb33918102af385bdd5d82225e17">738375813823cb33918102af385bdd5d82225e17</a></p>
<p>Found in base branch: <b>cosmic-1.6-experimental</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (0)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The compat_get_timex function in kernel/compat.c in the Linux kernel before 4.16.9 allows local users to obtain sensitive information from kernel memory via adjtimex.
<p>Publish Date: 2018-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11508>CVE-2018-11508</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-11508">https://nvd.nist.gov/vuln/detail/CVE-2018-11508</a></p>
<p>Release Date: 2018-05-28</p>
<p>Fix Resolution: 4.16.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
869 | 4,536,193,279 | IssuesEvent | 2016-09-08 19:39:37 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | winrm NanoServer Unable to find type System.Security.Cryptography.SHA1CryptoServiceProvider | affects_2.1 bug_report waiting_on_maintainer windows |
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
win_ping
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
```
##### CONFIGURATION
default
##### OS / ENVIRONMENT
Ubuntu 16.04.1 LTS
##### SUMMARY
Unable to establish winrm connection to new Windows Server Nano
##### STEPS TO REPRODUCE
ansible windows -i inventory/host -m win_ping
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Establish winrm connection
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
Loaded callback minimal of type stdout, v2.0
<10.0.0.5> ESTABLISH WINRM CONNECTION FOR USER: xxxx on PORT 5986 TO 10.0.0.5
<10.0.0.5> EXEC Set-StrictMode -Version Latest
(New-Item -Type Directory -Path $env:temp -Name "ansible-tmp-1471290564.47-58300212757313").FullName | Write-Host -Separator '';
<10.0.0.5> PUT "/tmp/tmpiUuwzr" TO "C:\Users\mrembas\AppData\Local\Temp\ansible-tmp-1471290564.47-58300212757313\win_ping.ps1"
10.0.0.5 | FAILED! => {
"failed": true,
"msg": "#< CLIXML\r\n<Objs Version=\"1.1.0.1\" xmlns=\"http://schemas.microsoft.com/powershell/2004/04\"><S S=\"Error\">Unable to find type [System.Security.Cryptography.SHA1CryptoServiceProvider]._x000D__x000A_</S><S S=\"Error\">At line:7 char:9_x000D__x000A_</S><S S=\"Error\">+ $sha1 = [System.Security.Cryptography.SHA1CryptoServiceProvider]::Cre ..._x000D__x000A_</S><S S=\"Error\">+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~_x000D__x000A_</S><S S=\"Error\"> + CategoryInfo : InvalidOperation: (System.Security...ServiceProv _x000D__x000A_</S><S S=\"Error\"> ider:TypeName) [], ParentContainsErrorRecordException_x000D__x000A_</S><S S=\"Error\"> + FullyQualifiedErrorId : TypeNotFound_x000D__x000A_</S><S S=\"Error\"> _x000D__x000A_</S></Objs>"
}
```
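The `TypeNotFound` error comes from Ansible's generated PowerShell upload wrapper, which instantiates `SHA1CryptoServiceProvider` (line 7 of the script, per the output above) to checksum the copied module file; Nano Server ships .NET Core, which does not carry that CSP class. Only the checksum step is at stake, shown here as a hedged Python equivalent — the portable PowerShell fix would presumably use the algorithm-agnostic `[System.Security.Cryptography.SHA1]::Create()` factory instead:

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    """Hex SHA-1 digest, the same value the upload wrapper computes
    to verify the transferred file."""
    return hashlib.sha1(data).hexdigest()
```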
1,647 | 6,572,672,849 | IssuesEvent | 2017-09-11 04:17:32 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | source address not set with firewalld module | affects_2.2 bug_report waiting_on_maintainer |
This issue sounded like #1808 but I gave the fix a try and it doesn't fix my issue.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
firewalld
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/niek/.ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
nocows = 1
##### OS / ENVIRONMENT
control node: Fedora 24 (soon to be 25)
server to manage: CloudLinux release 7.2 (Valeri Kubasov), so basically Centos 7.2
##### SUMMARY
Using source parameter of the firewalld module does not seem to work and it ends up in the firewall as 0.0.0.0.
##### STEPS TO REPRODUCE
Use the following code to try and manage the firewalld rules.
```
firewalld: source="1.2.3.4" port=1234/tcp permanent=true state=enabled
```
##### EXPECTED RESULTS
Only the source range 1.2.3.4 can reach the port 1234.
```
Chain IN_public_allow (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ctstate NEW
0 0 ACCEPT tcp -- * * 1.2.3.4/0 0.0.0.0/0 tcp dpt:1234 ctstate NEW
```
##### ACTUAL RESULTS
The source range is not set properly meaning everyone (0.0.0.0) can access the port.
```
Chain IN_public_allow (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ctstate NEW
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:1234 ctstate NEW
```
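While the module ignores `source` when combined with `port`, the usual firewalld-native workaround is a rich rule, which ties the source address and the port into one rule (applied with `firewall-cmd --permanent --add-rich-rule=…`). A sketch of building that rule string — the helper is illustrative; the string follows firewalld's rich-language syntax:

```python
def rich_rule(source: str, port: int, proto: str = "tcp") -> str:
    """firewalld rich rule that accepts `port` only from `source`."""
    return (
        f'rule family="ipv4" source address="{source}" '
        f'port protocol="{proto}" port="{port}" accept'
    )
```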
6,788 | 9,086,468,365 | IssuesEvent | 2019-02-18 11:01:24 | epoberezkin/ajv | https://api.github.com/repos/epoberezkin/ajv | opened | Request to make console optional for default construction | compatibility |
<!--
Frequently Asked Questions: https://github.com/epoberezkin/ajv/blob/master/FAQ.md
Please provide all info and reduce your schema and data to the smallest possible size.
This template is for compatibility issues.
For other issues please see https://github.com/epoberezkin/ajv/blob/master/CONTRIBUTING.md
-->
**The version of Ajv you are using**
latest
**The environment you have the problem with**
NodeJS vm.js
**Your code (please make it as small as possible to reproduce the issue)**
```
var Ajv = require('ajv'),
ajv = new Ajv(); // note {logger: console} not provided
```
**Issue**
Ajv assumes console to be present in the environment. That is acceptable for all normal use cases, but we use Ajv within the Node VM module, which is sandboxed to not have console.
The following line https://github.com/epoberezkin/ajv/blob/master/lib/ajv.js#L488 assumes console to be present. This results in issues such as https://github.com/postmanlabs/postman-app-support/issues/3199 and requires longer boilerplate code.
If it is okay, we can send a PR that checks if console is absent even in global scope and injects a stub.
137,918 | 18,769,544,164 | IssuesEvent | 2021-11-06 15:28:12 | samqws-marketing/box_box-ui-elements | https://api.github.com/repos/samqws-marketing/box_box-ui-elements | opened | CVE-2018-16469 (High) detected in merge-1.2.0.tgz | security vulnerability |
## CVE-2018-16469 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>merge-1.2.0.tgz</b></p></summary>
<p>Merge multiple objects into one, optionally creating a new cloned object. Similar to the jQuery.extend but more flexible. Works in Node.js and the browser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/merge/-/merge-1.2.0.tgz">https://registry.npmjs.org/merge/-/merge-1.2.0.tgz</a></p>
<p>Path to dependency file: box_box-ui-elements/package.json</p>
<p>Path to vulnerable library: box_box-ui-elements/node_modules/merge/package.json</p>
<p>
Dependency Hierarchy:
- sass-lint-1.13.1.tgz (Root Library)
- :x: **merge-1.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/box_box-ui-elements/commit/4fc776e2b95c8b497f6994cb2165365562ae1f82">4fc776e2b95c8b497f6994cb2165365562ae1f82</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The merge.recursive function in the merge package <1.2.1 can be tricked into adding or modifying properties of the Object prototype. These properties will be present on all objects allowing for a denial of service attack.
<p>Publish Date: 2018-10-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16469>CVE-2018-16469</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16469">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-16469</a></p>
<p>Release Date: 2018-10-30</p>
<p>Fix Resolution: v1.2.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"merge","packageVersion":"1.2.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"sass-lint:1.13.1;merge:1.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v1.2.1"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-16469","vulnerabilityDetails":"The merge.recursive function in the merge package \u003c1.2.1 can be tricked into adding or modifying properties of the Object prototype. These properties will be present on all objects allowing for a denial of service attack.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16469","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree sass lint merge isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails the merge recursive function in the merge package can be tricked into adding or modifying properties of the object prototype these properties will be present on all objects allowing for a denial of service attack vulnerabilityurl | 0 |
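The CVE-2018-16469 record above describes prototype pollution in `merge.recursive` (< 1.2.1): a crafted input can add properties to `Object.prototype`. A minimal sketch of that attack class, using a hypothetical naive recursive merge rather than the actual `merge` package's code:

```javascript
// Naive recursive merge, written here only to illustrate the CVE-2018-16469
// attack class; this is NOT the actual implementation of the `merge` package.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const s = source[key];
    if (s !== null && typeof s === "object" &&
        target[key] !== null && typeof target[key] === "object") {
      // For the key "__proto__", target[key] resolves to Object.prototype,
      // so the recursion writes attacker-controlled keys onto the shared prototype.
      naiveMerge(target[key], s);
    } else {
      target[key] = s;
    }
  }
  return target;
}

// Attacker-controlled JSON: JSON.parse creates an OWN property named "__proto__".
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload);

const leaked = {}.polluted; // true — every object now sees the injected property
delete Object.prototype.polluted; // undo the pollution for the rest of the process
console.log(leaked);
```

The fixed v1.2.1 mentioned in the record closes this class of bug by refusing to merge into prototype keys.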
61,397 | 12,189,892,586 | IssuesEvent | 2020-04-29 08:21:37 | halotroop/LiteCraft | https://api.github.com/repos/halotroop/LiteCraft | closed | [Ginger3D] Duplicate math classes | bad code | All vector and matrix classes should be removed and replaced with their JOML equivalents.
This is challenging because these custom classes have methods which do not exist in JOML.
It's probably caused by copying code meant for LWJGL2 (which did not use JOML). | 1.0 | [Ginger3D] Duplicate math classes - All vector and matrix classes should be removed and replaced with their JOML equivalents.
This is challenging because these custom classes have methods which do not exist in JOML.
It's probably caused by copying code meant for LWJGL2 (which did not use JOML). | non_main | duplicate math classes all vector and matrix classes should be removed and replaced with their joml equivalents this is challenging because these custom classes have methods which do not exist in joml it s probably caused by copying code meant for which did not use joml | 0 |
2,971 | 10,684,722,593 | IssuesEvent | 2019-10-22 11:07:56 | valbergconsulting/bitcore-abc | https://api.github.com/repos/valbergconsulting/bitcore-abc | closed | Spent index is not updated in ApplyBlockUndo | bug maintainance | In `ApplyBlockUndo` there are updates created for the spent index, but they are never written to the database.
It needs to be investigated if this is intended or not. If it's intended, remove dead code. If it's a bug, fix it and write a test for it.
I've looked back to bitcore-14.6, the code is the same there. I did not investigate further back in history.
RPC calls that use the spentindex are: getspentinfo, getblockdeltas, getrawtransaction | True | Spent index is not updated in ApplyBlockUndo - In `ApplyBlockUndo` there are updates created for the spent index, but they are never written to the database.
It needs to be investigated if this is intended or not. If it's intended, remove dead code. If it's a bug, fix it and write a test for it.
I've looked back to bitcore-14.6, the code is the same there. I did not investigate further back in history.
RPC calls that use the spentindex are: getspentinfo, getblockdeltas, getrawtransaction | main | spent index is not updated in applyblockundo in applyblockundo there are updates created for the spent index but they are never written to the database it needs to be investigated if this is intended or not if it s intended remove dead code if it s a bug fix it and write a test for it i ve looked back to bitcore the code is the same there i did not investigate further back in history rpc calls that use the spentindex are getspentinfo getblockdeltas getrawtransaction | 1 |
3,488 | 13,614,303,065 | IssuesEvent | 2020-09-23 13:04:55 | chaoss/website | https://api.github.com/repos/chaoss/website | closed | Add link to blog and news from home page | Maintainer Task | We have a section "News and Blog" on the home page that has the last three items. It would be great to add a link below them to "read older entries" or something like that.

| True | Add link to blog and news from home page - We have a section "News and Blog" on the home page that has the last three items. It would be great to add a link below them to "read older entries" or something like that.

| main | add link to blog and news from home page we have a section news and blog on the home page that has the last three items it would be great to add a link below them to read older entries or something like that | 1 |
409,419 | 11,962,057,387 | IssuesEvent | 2020-04-05 10:55:09 | python-discord/bot | https://api.github.com/repos/python-discord/bot | closed | Remove arguments from !free command | area: information good first issue priority: 3 - low type: feature | The [`!free` command](https://github.com/python-discord/bot/blob/master/bot/cogs/free.py) doesn't ping the user because the ping is inside the embed. The mention needs to be moved outside of the embed if it's to be used.
After discussion, instead the arguments for the command and the `Hey @blah` line is to be removed. | 1.0 | Remove arguments from !free command - The [`!free` command](https://github.com/python-discord/bot/blob/master/bot/cogs/free.py) doesn't ping the user because the ping is inside the embed. The mention needs to be moved outside of the embed if it's to be used.
After discussion, instead the arguments for the command and the `Hey @blah` line is to be removed. | non_main | remove arguments from free command the doesn t ping the user because the ping is inside the embed the mention needs to be moved outside of the embed if it s to be used after discussion instead the arguments for the command and the hey blah line is to be removed | 0 |
814,417 | 30,506,608,691 | IssuesEvent | 2023-07-18 17:22:08 | pepkit/pephub | https://api.github.com/repos/pepkit/pephub | closed | Database should store samples individually | enhancement priority low | Right now, the database stores Project objects in a table.
Instead, it would be more flexible to split samples and projects into separate tables, and link them. Then, updating a sample wouldn't require updating the entire project. | 1.0 | Database should store samples individually - Right now, the database stores Project objects in a table.
Instead, it would be more flexible to split samples and projects into separate tables, and link them. Then, updating a sample wouldn't require updating the entire project. | non_main | database should store samples individually right now the database stores project objects in a table instead it would be more flexible to split samples and projects into separate tables and link them then updating a sample wouldn t require updating the entire project | 0 |
31,620 | 11,957,482,386 | IssuesEvent | 2020-04-04 14:32:43 | dropwizard/dropwizard | https://api.github.com/repos/dropwizard/dropwizard | closed | update snakeyaml to 1.26+ to address security vulnerability CVE-2017-18640 | security | DESCRIPTION FROM CVE
The Alias feature in SnakeYAML 1.18 allows entity expansion during a load operation, a related issue to CVE-2003-1564.
EXPLANATION
The snakeyaml package is vulnerable to YAML Entity Expansion. The load method in Yaml.class allows for entities to reference other entities. An attacker could potentially exploit this behavior by providing a YAML document with many entities that reference each other, which could take a large amount of memory to process, potentially resulting in a Denial of Service (DoS) situation.
DETECTION
The application is vulnerable by using this component with untrusted user input when the maxAliasesForCollections is set too high or settings.setAllowRecursiveKeys is set to false.
RECOMMENDATION
We recommend upgrading to a version of this component that is not vulnerable to this specific issue.
Note: If this component is included as a bundled/transitive dependency of another component, there may not be an upgrade path. In this instance, we recommend contacting the maintainers who included the vulnerable package. Alternatively, we recommend investigating alternative components or a potential mitigating control.
ROOT CAUSE
snakeyaml-1.24-android.jarorg/yaml/snakeyaml/constructor/BaseConstructor.class( , 1.26)
ADVISORIES
Project:https://bitbucket.org/asomov/snakeyaml/issues/377/allow-configuration-for-preventing-billion
CVSS DETAILS
CVE CVSS 3:7.5
CVSS Vector:CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H | True | update snakeyaml to 1.26+ to address security vulnerability CVE-2017-18640 - DESCRIPTION FROM CVE
The Alias feature in SnakeYAML 1.18 allows entity expansion during a load operation, a related issue to CVE-2003-1564.
EXPLANATION
The snakeyaml package is vulnerable to YAML Entity Expansion. The load method in Yaml.class allows for entities to reference other entities. An attacker could potentially exploit this behavior by providing a YAML document with many entities that reference each other, which could take a large amount of memory to process, potentially resulting in a Denial of Service (DoS) situation.
DETECTION
The application is vulnerable by using this component with untrusted user input when the maxAliasesForCollections is set too high or settings.setAllowRecursiveKeys is set to false.
RECOMMENDATION
We recommend upgrading to a version of this component that is not vulnerable to this specific issue.
Note: If this component is included as a bundled/transitive dependency of another component, there may not be an upgrade path. In this instance, we recommend contacting the maintainers who included the vulnerable package. Alternatively, we recommend investigating alternative components or a potential mitigating control.
ROOT CAUSE
snakeyaml-1.24-android.jarorg/yaml/snakeyaml/constructor/BaseConstructor.class( , 1.26)
ADVISORIES
Project:https://bitbucket.org/asomov/snakeyaml/issues/377/allow-configuration-for-preventing-billion
CVSS DETAILS
CVE CVSS 3:7.5
CVSS Vector:CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H | non_main | update snakeyaml to to address security vulnerability cve description from cve the alias feature in snakeyaml allows entity expansion during a load operation a related issue to cve explanation the snakeyaml package is vulnerable to yaml entity expansion the load method in yaml class allows for entities to reference other entities an attacker could potentially exploit this behavior by providing a yaml document with many entities that reference each other which could take a large amount of memory to process potentially resulting in a denial of service dos situation detection the application is vulnerable by using this component with untrusted user input when the maxaliasesforcollections is set too high or settings setallowrecursivekeys is set to false recommendation we recommend upgrading to a version of this component that is not vulnerable to this specific issue note if this component is included as a bundled transitive dependency of another component there may not be an upgrade path in this instance we recommend contacting the maintainers who included the vulnerable package alternatively we recommend investigating alternative components or a potential mitigating control root cause snakeyaml android jarorg yaml snakeyaml constructor baseconstructor class advisories project cvss details cve cvss cvss vector cvss av n ac l pr n ui n s u c n i n a h | 0 |
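CVE-2017-18640 in the record above is an entity-expansion ("billion laughs") issue: each alias level multiplies the number of nodes materialized on load. The growth can be sketched with plain arithmetic (the level/alias counts here are illustrative, not taken from the advisory):

```javascript
// In a "billion laughs"-style YAML document, each level defines a sequence of
// N aliases to the previous anchor, so a document with L levels expands to
// N^L leaf nodes when aliases are resolved eagerly during load.
function expandedLeafCount(aliasesPerLevel, levels) {
  let count = 1;
  for (let i = 0; i < levels; i++) count *= aliasesPerLevel;
  return count;
}

// A tiny document (3 levels of 10 aliases) already expands to 1,000 nodes...
console.log(expandedLeafCount(10, 3)); // 1000
// ...and 9 levels reach a billion, which is why SnakeYAML 1.26 introduced the
// maxAliasesForCollections limit mentioned in the detection notes above.
console.log(expandedLeafCount(10, 9)); // 1000000000
```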
1,619 | 6,572,644,493 | IssuesEvent | 2017-09-11 04:01:41 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | filesystem does not support multiple devices (on btrfs f.e.) | affects_2.1 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`filesystem` module
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /home/pasha/Projects/Ansible.cfg/ansible.cfg
configured module search path = ['modules/']
```
##### CONFIGURATION
Does not make sense (not applicable)
##### OS / ENVIRONMENT
Fedora repos
##### SUMMARY
Task:
```
- name: Create btrfs filesystem
filesystem: fstype=btrfs dev='/dev/mapper/centos-home' opts='--label srv'
```
work as expected, but:
```
- name: Create btrfs filesystem
filesystem: fstype=btrfs dev='/dev/sda3 /dev/sdb' opts='-d single --label srv'
```
Produce error: **Device /dev/sda3 /dev/sdb not found.**
##### EXPECTED RESULTS
`Btrfs` (and some others, like `zfs`) allows creating filesystems across multiple devices - https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
So, it seems reasonable to make the `dev` parameter a list type, or just allow passing any string.
| True | filesystem does not support multiple devices (on btrfs f.e.) - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`filesystem` module
##### ANSIBLE VERSION
```
ansible 2.1.2.0
config file = /home/pasha/Projects/Ansible.cfg/ansible.cfg
configured module search path = ['modules/']
```
##### CONFIGURATION
Does not make sense (not applicable)
##### OS / ENVIRONMENT
Fedora repos
##### SUMMARY
Task:
```
- name: Create btrfs filesystem
filesystem: fstype=btrfs dev='/dev/mapper/centos-home' opts='--label srv'
```
work as expected, but:
```
- name: Create btrfs filesystem
filesystem: fstype=btrfs dev='/dev/sda3 /dev/sdb' opts='-d single --label srv'
```
Produce error: **Device /dev/sda3 /dev/sdb not found.**
##### EXPECTED RESULTS
`Btrfs` (and some others, like `zfs`) allows creating filesystems across multiple devices - https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
So, it seems reasonable to make the `dev` parameter a list type, or just allow passing any string.
| main | filesystem does not support multiple devices on btrfs f e issue type bug report component name filesystem module ansible version ansible config file home pasha projects ansible cfg ansible cfg configured module search path configuration does not have sence os environment fedora repos summary task name create btrfs filesystem filesystem fstype btrfs dev dev mapper centos home opts label srv work as expected but name create btrfs filesystem filesystem fstype btrfs dev dev dev sdb opts d single label srv produce error device dev dev sdb not found expected results btrfs and some other like zfs too allow create filesystems across multiple devices so it seams reasonable make dev parameter the list type or just allow pass any string | 1 |
4,766 | 24,545,805,719 | IssuesEvent | 2022-10-12 08:42:19 | ansible-collections/community.general | https://api.github.com/repos/ansible-collections/community.general | closed | community.general.newrelic_deployment is broken | bug module has_pr plugins monitoring needs_maintainer | ### Summary
The module is using a NewRelic endpoint that is no longer available
` # Send the data to NewRelic
url = "https://rpm.newrelic.com/deployments.xml"
data = urlencode(params)
headers = {
'x-api-key': module.params["token"],
}`
It should be https://api.newrelic.com/v2/applications/{application_id}/deployments.json
[Here](https://rpm.newrelic.com/api/explore/application_deployments/create) is the documentation
### Issue Type
Bug Report
### Component Name
community.general.newrelic_deployment
### Ansible Version
```
ansible 2.10.8
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Community.general Version
```
Collection Version
----------------- -------
community.general 5.0.1
```
### Steps to Reproduce
```
---
- name: Deploy
hosts: 127.0.0.1
tasks:
- name: Notify newrelic about an app deployment
community.general.newrelic_deployment:
token: <insert-token>
app_name: 'some_app'
revision: 'v7.11'
```
### Expected Results
I expected the module to work, but got an HTTP 403 instead
### Actual Results
```
FAILED! => {"changed": false, "msg": "unable to update newrelic: HTTP Error 403: Forbidden"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | True | community.general.newrelic_deployment is broken - ### Summary
The module is using a NewRelic endpoint that is no longer available
` # Send the data to NewRelic
url = "https://rpm.newrelic.com/deployments.xml"
data = urlencode(params)
headers = {
'x-api-key': module.params["token"],
}`
It should be https://api.newrelic.com/v2/applications/{application_id}/deployments.json
[Here](https://rpm.newrelic.com/api/explore/application_deployments/create) is the documentation
### Issue Type
Bug Report
### Component Name
community.general.newrelic_deployment
### Ansible Version
```
ansible 2.10.8
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### Community.general Version
```
Collection Version
----------------- -------
community.general 5.0.1
```
### Steps to Reproduce
```
---
- name: Deploy
hosts: 127.0.0.1
tasks:
- name: Notify newrelic about an app deployment
community.general.newrelic_deployment:
token: <insert-token>
app_name: 'some_app'
revision: 'v7.11'
```
### Expected Results
I expected the module to work, but got an HTTP 403 instead
### Actual Results
```
FAILED! => {"changed": false, "msg": "unable to update newrelic: HTTP Error 403: Forbidden"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | main | community general newrelic deployment is broken summary the module is using a newrelic endpoint that is no longer available send the data to newrelic url data urlencode params headers x api key module params it should be is the documentation issue type bug report component name community general newrelic deployment ansible version ansible config file none configured module search path ansible python module location usr lib dist packages ansible executable location usr bin ansible python version default feb community general version collection version community general steps to reproduce name deploy hosts tasks name notify newrelic about an app deployment community general newrelic deployment token app name some app revision expected results i expected the module to work got a http code instead actual results failed changed false msg unable to update newrelic http error forbidden code of conduct i agree to follow the ansible code of conduct | 1 |
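The newrelic_deployment report above says the module still posts to the retired `rpm.newrelic.com/deployments.xml` endpoint instead of the v2 REST API it links to. A sketch of how the corrected request could be assembled — the endpoint shape comes from the report itself, while the payload field names follow New Relic's v2 deployments format and are shown here as an illustration, not as the module's actual fix:

```javascript
// Build the v2 deployments request described in the issue:
// POST https://api.newrelic.com/v2/applications/{application_id}/deployments.json
function buildDeploymentRequest(applicationId, apiKey, revision) {
  return {
    url: `https://api.newrelic.com/v2/applications/${applicationId}/deployments.json`,
    method: "POST",
    headers: {
      "X-Api-Key": apiKey,            // same header the old module already sent
      "Content-Type": "application/json",
    },
    // v2 wraps deployment attributes in a "deployment" object (assumed format).
    body: JSON.stringify({ deployment: { revision } }),
  };
}

const req = buildDeploymentRequest(12345, "SECRET", "v7.11");
console.log(req.url);
```

Note that unlike the old XML endpoint, the v2 route is keyed by a numeric `application_id`, so a fix would also need to resolve `app_name` to an ID first.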
290,947 | 8,915,263,610 | IssuesEvent | 2019-01-19 03:38:43 | SETI/pds-opus | https://api.github.com/repos/SETI/pds-opus | closed | collections/addrange needs to work without a page number | A-Enhancement B-OPUS Django Effort 2 Medium Priority 2 | The range opus_id1, opus_id2 needs to work even without a page number, so that it can apply to the new no-page-number style of the new UI.
| 1.0 | collections/addrange needs to work without a page number - The range opus_id1, opus_id2 needs to work even without a page number, so that it can apply to the new no-page-number style of the new UI.
| non_main | collections addrange needs to work without a page number the range opus opus needs to work even without a page number so that it can apply to the new no page number style of the new ui | 0 |
226,402 | 7,518,863,753 | IssuesEvent | 2018-04-12 09:43:06 | omni-compiler/omni-compiler | https://api.github.com/repos/omni-compiler/omni-compiler | closed | Syntax error with the member array reference to the token keyword | Module: F_Front Priority: High | The following code causes syntax errors.
```fortran
TYPE t
INTEGER :: v
END TYPE
TYPE(t) :: in(1:3)
TYPE(t) :: to(1:3)
in(1)%v = 1
to(1)%v = 2
END
```
`in` and `to` are listed as tokens in the `keywords` (See F95-lex.c).
If variables are replaced with other names than those in the `keywords`, the above code works. | 1.0 | Syntax error with the member array reference to the token keyword - The following code causes syntax errors.
```fortran
TYPE t
INTEGER :: v
END TYPE
TYPE(t) :: in(1:3)
TYPE(t) :: to(1:3)
in(1)%v = 1
to(1)%v = 2
END
```
`in` and `to` are listed as tokens in the `keywords` (See F95-lex.c).
If variables are replaced with other names than those in the `keywords`, the above code works. | non_main | syntax error with the member array reference to the token keyword the following code causes syntax errors fortran type t integer v end type type t in type t to in v to v end in and to are listed as tokens in the keywords see lex c if variables are replaced with other names than those in the keywords the above code works | 0 |
5,820 | 30,794,685,226 | IssuesEvent | 2023-07-31 18:53:02 | professor-greebie/SENG8080-1-field_project | https://api.github.com/repos/professor-greebie/SENG8080-1-field_project | opened | Data storage - script | Data Storage and Maintainance | Hi Data Storage team, @prsnt , can you please provide the script for storing the data to the DevOps team? | True | Data storage - script - Hi Data Storage team, @prsnt , can you please provide the script for storing the data to the DevOps team? | main | data storage script hi data storage team prsnt can you please provide the script for storing the data to the devops team | 1 |
319,930 | 27,408,549,075 | IssuesEvent | 2023-03-01 08:57:41 | arkreen/upptime | https://api.github.com/repos/arkreen/upptime | closed | 🛑 Arkreen Test API net_getMinerListByOwner is down | status arkreen-test-api-net-get-miner-list-by-owner | In [`31a59ab`](https://github.com/arkreen/upptime/commit/31a59ab4788f596769a8059066a3ba59bce6a282
), Arkreen Test API net_getMinerListByOwner (https://testapi.arkreen.com/v1) was **down**:
- HTTP code: 200
- Response time: 202 ms
| 1.0 | 🛑 Arkreen Test API net_getMinerListByOwner is down - In [`31a59ab`](https://github.com/arkreen/upptime/commit/31a59ab4788f596769a8059066a3ba59bce6a282
), Arkreen Test API net_getMinerListByOwner (https://testapi.arkreen.com/v1) was **down**:
- HTTP code: 200
- Response time: 202 ms
| non_main | 🛑 arkreen test api net getminerlistbyowner is down in arkreen test api net getminerlistbyowner was down http code response time ms | 0 |
54,696 | 13,922,477,811 | IssuesEvent | 2020-10-21 13:19:27 | solana-labs/solana | https://api.github.com/repos/solana-labs/solana | closed | v1.3: Enable stricter check on rent-exempt accounts on mainnet-beta | security | #### Problem
this needs to be updated for testnet:
https://github.com/solana-labs/solana/blob/e2626dad83e597cbb769e0033fdc3ee597a910ec/runtime/src/rent_collector.rs#L68
#### Context
#11342
#### Cousin
#11681 | True | v1.3: Enable stricter check on rent-exempt accounts on mainnet-beta - #### Problem
this needs to be updated for testnet:
https://github.com/solana-labs/solana/blob/e2626dad83e597cbb769e0033fdc3ee597a910ec/runtime/src/rent_collector.rs#L68
#### Context
#11342
#### Cousin
#11681 | non_main | enable stricter check on rent exempt accounts on mainnet beta problem this needs to be updated for testnet context cousin | 0 |
54,863 | 30,475,079,902 | IssuesEvent | 2023-07-17 15:56:16 | hashgraph/hedera-services | https://api.github.com/repos/hashgraph/hedera-services | opened | `MerkleHashBuilder` does not scale | Performance | ### Description
Running `MerkleHashBenchmarks` with
```
./gradlew :swirlds-unit-tests:common:swirlds-common-test:performanceTest --tests com.swirlds.common.test.merkle.MerkleHashBenchmarks
```
on Mac OS, M1 Max, 10 cores:
```
Hash Small Trees
--- Synchronous hashing ---
Average time to hash: 6566.111us
--- Asynchronous hashing ---
Average time to hash: 2334.022us
Speedup from multithreading: 2.8132172704456084
Hash Large Trees
--- Synchronous hashing ---
Average time to hash: 127828.53us
--- Asynchronous hashing ---
Average time to hash: 43414.15us
Speedup from multithreading: 2.944397851852449
Hash Huge Trees
--- Synchronous hashing ---
Average time to hash: 1647220.5us
--- Asynchronous hashing ---
Average time to hash: 584125.5us
Speedup from multithreading: 2.819977042604714
```
and on Linux, Xeon@2.80GHz, 40 cores:
```
Hash Small Trees
--- Synchronous hashing ---
Average time to hash: 8906.284us
--- Asynchronous hashing ---
Average time to hash: 13589.714us
Speedup from multithreading: 0.6553694948988624
Hash Large Trees
--- Synchronous hashing ---
Average time to hash: 180144.99us
--- Asynchronous hashing ---
Average time to hash: 192195.66us
Speedup from multithreading: 0.93729998897998
Hash Huge Trees
--- Synchronous hashing ---
Average time to hash: 2334200.6us
--- Asynchronous hashing ---
Average time to hash: 2264936.1us
Speedup from multithreading: 1.0305812159557173
```
The parallel hashing algorithm actually causes performance degradation on a system with more cores.
### Steps to reproduce
Run `com.swirlds.common.test.merkle.MerkleHashBenchmarks` as above on systems with a different number of cores.
### Additional context
_No response_
### Hedera network
other
### Version
v0.40.0-SNAPSHOT
### Operating system
Linux | True | `MerkleHashBuilder` does not scale - ### Description
Running `MerkleHashBenchmarks` with
```
./gradlew :swirlds-unit-tests:common:swirlds-common-test:performanceTest --tests com.swirlds.common.test.merkle.MerkleHashBenchmarks
```
on Mac OS, M1 Max, 10 cores:
```
Hash Small Trees
--- Synchronous hashing ---
Average time to hash: 6566.111us
--- Asynchronous hashing ---
Average time to hash: 2334.022us
Speedup from multithreading: 2.8132172704456084
Hash Large Trees
--- Synchronous hashing ---
Average time to hash: 127828.53us
--- Asynchronous hashing ---
Average time to hash: 43414.15us
Speedup from multithreading: 2.944397851852449
Hash Huge Trees
--- Synchronous hashing ---
Average time to hash: 1647220.5us
--- Asynchronous hashing ---
Average time to hash: 584125.5us
Speedup from multithreading: 2.819977042604714
```
and on Linux, Xeon@2.80GHz, 40 cores:
```
Hash Small Trees
--- Synchronous hashing ---
Average time to hash: 8906.284us
--- Asynchronous hashing ---
Average time to hash: 13589.714us
Speedup from multithreading: 0.6553694948988624
Hash Large Trees
--- Synchronous hashing ---
Average time to hash: 180144.99us
--- Asynchronous hashing ---
Average time to hash: 192195.66us
Speedup from multithreading: 0.93729998897998
Hash Huge Trees
--- Synchronous hashing ---
Average time to hash: 2334200.6us
--- Asynchronous hashing ---
Average time to hash: 2264936.1us
Speedup from multithreading: 1.0305812159557173
```
The parallel hashing algorithm actually causes performance degradation on a system with more cores.
### Steps to reproduce
Run `com.swirlds.common.test.merkle.MerkleHashBenchmarks` as above on systems with a different number of cores.
### Additional context
_No response_
### Hedera network
other
### Version
v0.40.0-SNAPSHOT
### Operating system
Linux | non_main | merklehashbuilder does not scale description running merklehashbenchmarks with gradlew swirlds unit tests common swirlds common test performancetest tests com swirlds common test merkle merklehashbenchmarks on mac os max cores hash small trees synchronous hashing average time to hash asynchronous hashing average time to hash speedup from multithreading hash large trees synchronous hashing average time to hash asynchronous hashing average time to hash speedup from multithreading hash huge trees synchronous hashing average time to hash asynchronous hashing average time to hash speedup from multithreading and on linux xeon cores hash small trees synchronous hashing average time to hash asynchronous hashing average time to hash speedup from multithreading hash large trees synchronous hashing average time to hash asynchronous hashing average time to hash speedup from multithreading hash huge trees synchronous hashing average time to hash asynchronous hashing average time to hash speedup from multithreading the parallel hashing algorithm actually causes performance degradation on a system with more cores steps to reproduce run com swirlds common test merkle merklehashbenchmarks as above on systems with a different number of cores additional context no response hedera network other version snapshot operating system linux | 0 |
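The speedup figures in the MerkleHashBuilder benchmark output above are simply the ratio of synchronous to asynchronous hashing time; recomputing them from the reported numbers makes the scaling complaint explicit:

```javascript
// Speedup = synchronous time / asynchronous time, per the benchmark output above.
const speedup = (syncUs, asyncUs) => syncUs / asyncUs;

// M1 Max (10 cores), small trees: parallel hashing helps, but only ~2.8x.
console.log(speedup(6566.111, 2334.022).toFixed(2));   // "2.81"

// 40-core Xeon, small trees: the "speedup" is below 1 — an outright slowdown.
console.log(speedup(8906.284, 13589.714).toFixed(2));  // "0.66"
```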
138,825 | 12,830,246,889 | IssuesEvent | 2020-07-07 01:33:35 | rear/rear | https://api.github.com/repos/rear/rear | closed | Provide a systemd service and timer to run "rear checklayout || rear mkrescue" | documentation enhancement no-issue-activity | In https://github.com/rear/rear/issues/1892 via
https://github.com/rear/rear/commit/89a8f18ec402b439caf4800421644f5bf5d174e5
the /etc/cron.d/rear/ related things were removed for ReaR 2.5
(see that issue for the reasoning behind)
and instead a systemd service and timer to run
```
/usr/sbin/rear checklayout || /usr/sbin/rear mkrescue
```
should be provided and described in the documentation for ReaR 2.6.
See https://github.com/rear/rear/issues/1892#issuecomment-456018031
excerpts:
```
rmetrich commented on Jan 21
...
This automatic cron is broken and leads to
having broken ReaR ISOs at the end,
in my opinion, we should remove this file and
provide a systemd service + timer instead,
which wouldn't be enabled by default.
Example:
rear-rescue-iso.timer
-------------------------------------------------------
[Unit]
Description=ReaR ... Creation Timer Task
Documentation=man:rear(8)
After=network.target
[Timer]
OnCalendar=daily
RandomizedDelaySec=14400
[Install]
WantedBy=multi-user.target
-------------------------------------------------------
rear-rescue-iso.service
-------------------------------------------------------
[Unit]
Description=ReaR ... Creation
Documentation=man:rear(8)
After=network.target
[Service]
Type=simple
ExecStart=/bin/sh -c '/usr/sbin/rear checklayout || /usr/sbin/rear mkrescue'
Restart=no
WatchdogSec=600
BlockIOWeight=100
-------------------------------------------------------
```
| 1.0 | Provide a systemd service and timer to run "rear checklayout || rear mkrescue" - In https://github.com/rear/rear/issues/1892 via
https://github.com/rear/rear/commit/89a8f18ec402b439caf4800421644f5bf5d174e5
the /etc/cron.d/rear/ related things were removed for ReaR 2.5
(see that issue for the reasoning behind)
and instead a systemd service and timer to run
```
/usr/sbin/rear checklayout || /usr/sbin/rear mkrescue
```
should be provided and described in the documentation for ReaR 2.6.
See https://github.com/rear/rear/issues/1892#issuecomment-456018031
excerpts:
```
rmetrich commented on Jan 21
...
This automatic cron is broken and leads to
having broken ReaR ISOs at the end,
in my opinion, we should remove this file and
provide a systemd service + timer instead,
which wouldn't be enabled by default.
Example:
rear-rescue-iso.timer
-------------------------------------------------------
[Unit]
Description=ReaR ... Creation Timer Task
Documentation=man:rear(8)
After=network.target
[Timer]
OnCalendar=daily
RandomizedDelaySec=14400
[Install]
WantedBy=multi-user.target
-------------------------------------------------------
rear-rescue-iso.service
-------------------------------------------------------
[Unit]
Description=ReaR ... Creation
Documentation=man:rear(8)
After=network.target
[Service]
Type=simple
ExecStart=/bin/sh -c '/usr/sbin/rear checklayout || /usr/sbin/rear mkrescue'
Restart=no
WatchdogSec=600
BlockIOWeight=100
-------------------------------------------------------
```
| non_main | provide a systemd service and timer to run rear checklayout rear mkrescue in via the etc cron d rear related things were removed for rear see that issue for the reasoning behind and instead a systemd service and timer to run usr sbin rear checklayout usr sbin rear mkrescue should be provided and described in the documentation for rear see excerpts rmetrich commented on jan this automatic cron is broken and leads to having broken rear isos at the end in my opinion we should remove this file and provide a systemd service timer instead which wouldn t be enabled by default example rear rescue iso timer description rear creation timer task documentation man rear after network target oncalendar daily randomizeddelaysec wantedby multi user target rear rescue iso service description rear creation documentation man rear after network target type simple execstart bin sh c usr sbin rear checklayout usr sbin rear mkrescue restart no watchdogsec blockioweight | 0 |
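The ReaR record above hinges on the shell short-circuit `rear checklayout || rear mkrescue`. A minimal sketch with stub functions (`checklayout` and `mkrescue` here are stand-ins, not the real ReaR commands) shows why the rescue image is only rebuilt when the layout check exits non-zero:

```shell
# Sketch of the guard the proposed service runs (stubs, not real ReaR):
# with `a || b`, b executes only when a exits non-zero, i.e. mkrescue
# fires only when checklayout reports a changed disk layout.
checklayout() { return "$1"; }      # stub: exit status is passed in
mkrescue()    { echo "mkrescue ran"; }

checklayout 0 || mkrescue           # layout unchanged: prints nothing
checklayout 1 || mkrescue           # layout changed: prints "mkrescue ran"
```

In the proposed unit file, `ExecStart=/bin/sh -c '...'` wraps exactly this pattern; the timer merely schedules it daily.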
30,561 | 14,601,762,255 | IssuesEvent | 2020-12-21 09:11:50 | 5app/base5-ui | https://api.github.com/repos/5app/base5-ui | closed | Use smaller colour manipulation library | Performance [zube]: Staging TBT refactor | Chroma JS weighs in at 13.7 kB, while there are smaller libraries like https://color2k.com that weigh less than 3 kB. Look into whether the API of the latter is equally capable and switch over if so. | True | Use smaller colour manipulation library - Chroma JS weighs in at 13.7 kB, while there are smaller libraries like https://color2k.com that weigh less than 3 kB. Look into whether the API of the latter is equally capable and switch over if so. | non_main | use smaller colour manipulation library chroma js weighs in at kb while there are smaller libraries like that weigh less than kb look into whether the api of the latter is equally capable and switch over if so | 0 |
199,047 | 6,980,255,775 | IssuesEvent | 2017-12-13 00:39:48 | kubernetes-incubator/cri-containerd | https://api.github.com/repos/kubernetes-incubator/cri-containerd | closed | Add containerd/cri-containerd monitor. | priority/P2 | In our current kube-up.sh integration, we don't have a [`kube-docker-monitor.service`](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/node.yaml#L42) like health monitor for cri-containerd/containerd.
We should add one. | 1.0 | Add containerd/cri-containerd monitor. - In our current kube-up.sh integration, we don't have a [`kube-docker-monitor.service`](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/node.yaml#L42) like health monitor for cri-containerd/containerd.
We should add one. | non_main | add containerd cri containerd monitor in our current kube up sh integration we don t have a like health monitor for cri containerd containerd we should add one | 0 |
3,353 | 13,018,011,817 | IssuesEvent | 2020-07-26 15:18:09 | RapidField/solid-instruments | https://api.github.com/repos/RapidField/solid-instruments | closed | Fix 'Complex Method' issue in src\RapidField.SolidInstruments.Core\Extensions\StringExtensions.cs | Category-Maintenance Source-Maintainer Stage-4-Complete Subcategory-Conventions Subsystem-Core Tag-AddReleaseNote Verdict-Released Version-1.0.25 WindowForDelivery-2020-Q4 | # Maintenance Request
This issue represents a request for documentation, testing, refactoring or other non-functional changes.
## Overview
[CodeFactor](https://www.codefactor.io/repository/github/rapidfield/solid-instruments) found an issue: Complex Method
It's currently on:
[src\RapidField.SolidInstruments.Core\Extensions\StringExtensions.cs:1242-1383
](https://www.codefactor.io/repository/github/rapidfield/solid-instruments/source/master/src/RapidField.SolidInstruments.Core/Extensions/StringExtensions.cs#L1242)Commit 93a0cd4d6caba3732f36ad3ffa12fc08337971a0
## Statement of work
The following list describes the work to be done and defines acceptance criteria for the feature.
1. Resolve the complex method.
## Revision control plan
**Solid Instruments** uses the [**RapidField Revision Control Workflow**](https://github.com/RapidField/solid-instruments/blob/master/CONTRIBUTING.md#revision-control-strategy). Individual contributors should follow the branching plan below when working on this issue.
- `master` is the pull request target for
- `release/v1.0.25-preview1`, which is the pull request target for
- `develop`, which is the pull request target for
- `feature/0053_complex-string-extensions`, which is the pull request target for contributing user branches, which should be named using the pattern
- `user/{username}/0053_complex-string-extensions` | True | Fix 'Complex Method' issue in src\RapidField.SolidInstruments.Core\Extensions\StringExtensions.cs - # Maintenance Request
This issue represents a request for documentation, testing, refactoring or other non-functional changes.
## Overview
[CodeFactor](https://www.codefactor.io/repository/github/rapidfield/solid-instruments) found an issue: Complex Method
It's currently on:
[src\RapidField.SolidInstruments.Core\Extensions\StringExtensions.cs:1242-1383
](https://www.codefactor.io/repository/github/rapidfield/solid-instruments/source/master/src/RapidField.SolidInstruments.Core/Extensions/StringExtensions.cs#L1242)Commit 93a0cd4d6caba3732f36ad3ffa12fc08337971a0
## Statement of work
The following list describes the work to be done and defines acceptance criteria for the feature.
1. Resolve the complex method.
## Revision control plan
**Solid Instruments** uses the [**RapidField Revision Control Workflow**](https://github.com/RapidField/solid-instruments/blob/master/CONTRIBUTING.md#revision-control-strategy). Individual contributors should follow the branching plan below when working on this issue.
- `master` is the pull request target for
- `release/v1.0.25-preview1`, which is the pull request target for
- `develop`, which is the pull request target for
- `feature/0053_complex-string-extensions`, which is the pull request target for contributing user branches, which should be named using the pattern
- `user/{username}/0053_complex-string-extensions` | main | fix complex method issue in src rapidfield solidinstruments core extensions stringextensions cs maintenance request this issue represents a request for documentation testing refactoring or other non functional changes overview found an issue complex method it s currently on src rapidfield solidinstruments core extensions stringextensions cs statement of work the following list describes the work to be done and defines acceptance criteria for the feature resolve the complex method revision control plan solid instruments uses the individual contributors should follow the branching plan below when working on this issue master is the pull request target for release which is the pull request target for develop which is the pull request target for feature complex string extensions which is the pull request target for contributing user branches which should be named using the pattern user username complex string extensions | 1 |
3,164 | 12,226,509,108 | IssuesEvent | 2020-05-03 11:15:55 | gfleetwood/asteres | https://api.github.com/repos/gfleetwood/asteres | opened | jennybc/debugging (236074039) | R maintain | https://github.com/jennybc/debugging
Talk about general debugging strategies. How to be less confused and frustrated. | True | jennybc/debugging (236074039) - https://github.com/jennybc/debugging
Talk about general debugging strategies. How to be less confused and frustrated. | main | jennybc debugging talk about general debugging strategies how to be less confused and frustrated | 1 |
1,065 | 4,889,234,068 | IssuesEvent | 2016-11-18 09:31:30 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | include_role: privilege escalation (nested role) | affects_2.2 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/userdev/Documents/dlp-ansible/deploy_aqz/ansible.cfg
configured module search path = ['library']
```
(branch stable-2.2)
##### CONFIGURATION
local roles and libraries location
##### OS / ENVIRONMENT
Master: Ubuntu 16.04.2
Managed: Rhel 6.6
##### SUMMARY
privileges are not passed to the nested role
##### STEPS TO REPRODUCE
```
- hosts: all
gather_facts: True
tasks:
- command: "whoami"
- include_role:
name: "role_test_a"
become: "yes"
become_user: "user2"
```
role_test_a/tasks/main.yml
```
---
- command: "whoami"
- include_role:
name: "role_test_b"
```
role_test_b/tasks/main.yml
```
---
- command: "whoami"
```
##### EXPECTED RESULTS
```
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [host]
TASK [command] *****************************************************************
changed: [host] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.002667", "end": "2016-10-27 09:37:19.915705", "rc": 0, "start": "2016-10-27 09:37:19.913038", "stderr": "", "stdout": "user1", "stdout_lines": ["user1"], "warnings": []}
TASK [role_test_a : command] ***************************************************
changed: [host] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.004548", "end": "2016-10-27 09:37:20.550015", "rc": 0, "start": "2016-10-27 09:37:20.545467", "stderr": "", "stdout": "user2", "stdout_lines": ["user2"], "warnings": []}
TASK [role_test_b : command] ***************************************************
changed: [host ] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.002869", "end": "2016-10-27 09:37:21.134721", "rc": 0, "start": "2016-10-27 09:37:21.131852", "stderr": "", "stdout": "user2", "stdout_lines": ["user2"], "warnings": []}
PLAY RECAP *********************************************************************
host : ok=7 changed=3 unreachable=0 failed=0
```
##### ACTUAL RESULTS
```
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [host]
TASK [command] *****************************************************************
changed: [host] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.002667", "end": "2016-10-27 09:37:19.915705", "rc": 0, "start": "2016-10-27 09:37:19.913038", "stderr": "", "stdout": "user1", "stdout_lines": ["user1"], "warnings": []}
TASK [role_test_a : command] ***************************************************
changed: [host] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.004548", "end": "2016-10-27 09:37:20.550015", "rc": 0, "start": "2016-10-27 09:37:20.545467", "stderr": "", "stdout": "user2", "stdout_lines": ["user2"], "warnings": []}
TASK [role_test_b : command] ***************************************************
changed: [host ] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.002869", "end": "2016-10-27 09:37:21.134721", "rc": 0, "start": "2016-10-27 09:37:21.131852", "stderr": "", "stdout": "user1", "stdout_lines": ["user1"], "warnings": []}
PLAY RECAP *********************************************************************
host : ok=7 changed=3 unreachable=0 failed=0
```
| True | include_role: privilege escalation (nested role) - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /home/userdev/Documents/dlp-ansible/deploy_aqz/ansible.cfg
configured module search path = ['library']
```
(branch stable-2.2)
##### CONFIGURATION
local roles and libraries location
##### OS / ENVIRONMENT
Master: Ubuntu 16.04.2
Managed: Rhel 6.6
##### SUMMARY
privileges are not passed to the nested role
##### STEPS TO REPRODUCE
```
- hosts: all
gather_facts: True
tasks:
- command: "whoami"
- include_role:
name: "role_test_a"
become: "yes"
become_user: "user2"
```
role_test_a/tasks/main.yml
```
---
- command: "whoami"
- include_role:
name: "role_test_b"
```
role_test_b/tasks/main.yml
```
---
- command: "whoami"
```
##### EXPECTED RESULTS
```
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [host]
TASK [command] *****************************************************************
changed: [host] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.002667", "end": "2016-10-27 09:37:19.915705", "rc": 0, "start": "2016-10-27 09:37:19.913038", "stderr": "", "stdout": "user1", "stdout_lines": ["user1"], "warnings": []}
TASK [role_test_a : command] ***************************************************
changed: [host] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.004548", "end": "2016-10-27 09:37:20.550015", "rc": 0, "start": "2016-10-27 09:37:20.545467", "stderr": "", "stdout": "user2", "stdout_lines": ["user2"], "warnings": []}
TASK [role_test_b : command] ***************************************************
changed: [host ] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.002869", "end": "2016-10-27 09:37:21.134721", "rc": 0, "start": "2016-10-27 09:37:21.131852", "stderr": "", "stdout": "user2", "stdout_lines": ["user2"], "warnings": []}
PLAY RECAP *********************************************************************
host : ok=7 changed=3 unreachable=0 failed=0
```
##### ACTUAL RESULTS
```
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [host]
TASK [command] *****************************************************************
changed: [host] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.002667", "end": "2016-10-27 09:37:19.915705", "rc": 0, "start": "2016-10-27 09:37:19.913038", "stderr": "", "stdout": "user1", "stdout_lines": ["user1"], "warnings": []}
TASK [role_test_a : command] ***************************************************
changed: [host] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.004548", "end": "2016-10-27 09:37:20.550015", "rc": 0, "start": "2016-10-27 09:37:20.545467", "stderr": "", "stdout": "user2", "stdout_lines": ["user2"], "warnings": []}
TASK [role_test_b : command] ***************************************************
changed: [host ] => {"changed": true, "cmd": ["whoami"], "delta": "0:00:00.002869", "end": "2016-10-27 09:37:21.134721", "rc": 0, "start": "2016-10-27 09:37:21.131852", "stderr": "", "stdout": "user1", "stdout_lines": ["user1"], "warnings": []}
PLAY RECAP *********************************************************************
host : ok=7 changed=3 unreachable=0 failed=0
```
| main | include role privilege escalation nested role issue type bug report component name include role ansible version ansible config file home userdev documents dlp ansible deploy aqz ansible cfg configured module search path branch stable configuration local roles and libraries location os environment master ubuntu managed rhel summary privileges are not pass to nested role steps to reproduce hosts all gather facts true tasks command whoami include role name role test a become yes become user role test a tasks main yml command whoami include role name role test b role test b tasks main yml command whoami expected results play task ok task changed changed true cmd delta end rc start stderr stdout stdout lines warnings task changed changed true cmd delta end rc start stderr stdout stdout lines warnings task changed changed true cmd delta end rc start stderr stdout stdout lines warnings play recap host ok changed unreachable failed actual results play task ok task changed changed true cmd delta end rc start stderr stdout stdout lines warnings task changed changed true cmd delta end rc start stderr stdout stdout lines warnings task changed changed true cmd delta end rc start stderr stdout stdout lines warnings play recap host ok changed unreachable failed | 1 |
53,837 | 3,051,749,766 | IssuesEvent | 2015-08-12 10:38:35 | transientskp/tkp | https://api.github.com/repos/transientskp/tkp | closed | python-monetdb can't handle numpy datatypes | bug priority low | this problems keeps on coming back. I think it is good to add numpy datatype support to the python-monetdb package.
I add this issue here for myself, since I don't use the monetdb bug tracker and otherwise I may forget.
**Bart Scheers**: Probably fixed by http://dev.monetdb.org/hg/MonetDB/rev/088315943907
moved from LOFAR issue tracker:
https://support.astron.nl/lofar_issuetracker/issues/5944
| 1.0 | python-monetdb can't handle numpy datatypes - this problems keeps on coming back. I think it is good to add numpy datatype support to the python-monetdb package.
I add this issue here for myself, since I don't use the monetdb bug tracker and otherwise I may forget.
**Bart Scheers**: Probably fixed by http://dev.monetdb.org/hg/MonetDB/rev/088315943907
moved from LOFAR issue tracker:
https://support.astron.nl/lofar_issuetracker/issues/5944
| non_main | python monetdb can t handle numpy datatypes this problems keeps on coming back i think it is good to add numpy datatype support to the python monetdb package i add this issue here for myself since i don t use the monetdb bug tracker and otherwise i may forget bart scheers probably fixed by moved from lofar issue tracker | 0 |
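A common client-side workaround for drivers that choke on numpy datatypes is to coerce numpy scalars to native Python types before handing parameters to the driver. A minimal sketch (the `to_native` helper is hypothetical, not part of python-monetdb):

```python
import numpy as np

# Hypothetical workaround (to_native is not part of python-monetdb):
# convert numpy scalar types to native Python types before binding
# query parameters, so the driver only ever sees int/float/str/etc.
def to_native(value):
    # np.generic is the base class of all numpy scalar types;
    # .item() returns the equivalent native Python object.
    return value.item() if isinstance(value, np.generic) else value

params = [np.int64(7), np.float64(2.5), "text"]
native = [to_native(v) for v in params]
print([type(v).__name__ for v in native])  # → ['int', 'float', 'str']
```

Fixing this in the driver itself (as the linked MonetDB changeset attempts) is still preferable, since callers should not need to know which parameters originate from numpy.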
265,332 | 23,160,823,087 | IssuesEvent | 2022-07-29 17:32:05 | modin-project/modin | https://api.github.com/repos/modin-project/modin | opened | TEST: windows ray CI: flaky segmentation fault and "Windows fatal exception: access violation" | CI Flaky Test | Here's an instance from `modin/pandas/test/dataframe/test_join_sort.py`: https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true
<details>
<summary>Stack trace</summary>
```
============================= test session starts =============================
platform win32 -- Python 3.8.13, pytest-7.1.2, pluggy-1.0.0
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: D:\a\modin\modin, configfile: setup.cfg
plugins: Faker-13.15.1, benchmark-3.4.1, cov-2.11.0, forked-1.4.0, xdist-2.5.0
collected 12998 items
Windows fatal exception: access violation
Thread 0x000019b8 (most recent call first):
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 1258 in channel_spin
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x000011a0 (most recent call first):
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 1258 in channel_spin
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x000007cc (most recent call first):
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 1258 in channel_spin
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x00001270 (most recent call first):
File "C:\Miniconda3\envs\modin\lib\threading.py", line 306 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line 106 in _wait_once
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line 148 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 733 in result
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 249 in _poll_locked
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 351 in poll
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py", line 475 in print_logs
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x000001e8 (most recent call first):
File "C:\Miniconda3\envs\modin\lib\threading.py", line 306 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line 106 in _wait_once
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line 148 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 733 in result
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 249 in _poll_locked
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 317 in poll
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py", line [13](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:14)89 in listen_error_messages
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x000005b4 (most recent call first):
File "C:\Miniconda3\envs\modin\lib\threading.py", line 306 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line 106 in _wait_once
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line [14](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:15)8 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 733 in result
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 249 in _poll_locked
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 385 in poll
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\import_thread.py", line 70 in _run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x00001b3c (most recent call first):
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py", line 364 in get_objects
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py", line 1825 in get
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\client_mode_hook.py", line 105 in wrapper
File "D:\a\modin\modin\modin\core\execution\ray\implementations\pandas_on_ray\partitioning\partition_manager.py", line 110 in get_objects_from_partitions
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\core\dataframe\pandas\partitioning\partition_manager.py", line 866 in get_indices
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 429 in _compute_axis_labels
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 2311 in <listcomp>
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 2310 in broadcast_apply_full_axis
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 1[15](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:16) in run_f_on_minimally_updated_metadata
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 1876 in apply_full_axis
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 115 in run_f_on_minimally_updated_metadata
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\core\storage_formats\pandas\query_compiler.py", line 505 in join
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\pandas\dataframe.py", line 1275 in join
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\pandas\test\dataframe\test_join_sort.py", line 111 in test_join
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\python.py", line 192 in pytest_pyfunc_call
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py", line 39 in _multicall
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py", line 80 in _hookexec
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py", line 265 in __call__
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\python.py", line 1761 in runtest
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line [16](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:17)6 in pytest_runtest_call
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py", line 39 in _multicall
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py", line 80 in _hookexec
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py", line 265 in __call__
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 259 in <lambda>
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 338 in from_call
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 258 in call_runtest_hook
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 219 in call_and_report
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 130 in runtestprotocol
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 111 in pytest_runtest_protocol
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py", line 39 in _multicall
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py", line 80 in _hookexec
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py", line 265 in __call__
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py", line 347 in pytest_runtestloop
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py", line 39 in _multicall
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py", line 80 in _hookexec
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py", line 265 in __call__
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py", line 322 in _main
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py", line 268 in wrap_session
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py", line 315 in pytest_cmdline_main
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py", line 39 in _multicall
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py", line 80 in _hookexec
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py", line 265 in __call__
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\config\__init__.py", line 164 in main
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\config\__init__.py", line [18](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:19)7 in console_main
File "C:\Miniconda3\envs\modin\lib\site-packages\pytest\__main__.py", line 5 in <module>
File "C:\Miniconda3\envs\modin\lib\runpy.py", line 87 in _run_code
File "C:\Miniconda3\envs\modin\lib\runpy.py", line [19](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:20)4 in _run_module_as_main
D:\a\_temp\16c1e1a0-adaf-4bff-9b1c-40fb0dbccb[25](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:26).sh: line 1: 10[32](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:33) Segmentation fault python -m pytest modin/pandas/test/dataframe/test_join_sort.py
modin\pandas\test\dataframe\test_join_sort.py ..
Error: Process completed with exit code 139.
```
</details> | 1.0 | TEST: windows ray CI: flaky segmentation fault and "Windows fatal exception: access violation" - Here's an instance from `modin/pandas/test/dataframe/test_join_sort.py`: https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true
<details>
<summary>Stack trace</summary>
```
============================= test session starts =============================
platform win32 -- Python 3.8.13, pytest-7.1.2, pluggy-1.0.0
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: D:\a\modin\modin, configfile: setup.cfg
plugins: Faker-13.15.1, benchmark-3.4.1, cov-2.11.0, forked-1.4.0, xdist-2.5.0
collected 12998 items
Windows fatal exception: access violation
Thread 0x000019b8 (most recent call first):
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 1258 in channel_spin
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x000011a0 (most recent call first):
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 1258 in channel_spin
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x000007cc (most recent call first):
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 1258 in channel_spin
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x00001270 (most recent call first):
File "C:\Miniconda3\envs\modin\lib\threading.py", line 306 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line 106 in _wait_once
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line 148 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 733 in result
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 249 in _poll_locked
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 351 in poll
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py", line 475 in print_logs
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x000001e8 (most recent call first):
File "C:\Miniconda3\envs\modin\lib\threading.py", line 306 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line 106 in _wait_once
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line 148 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 733 in result
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 249 in _poll_locked
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 317 in poll
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py", line [13](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:14)89 in listen_error_messages
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x000005b4 (most recent call first):
File "C:\Miniconda3\envs\modin\lib\threading.py", line 306 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line 106 in _wait_once
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_common.py", line [14](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:15)8 in wait
File "C:\Miniconda3\envs\modin\lib\site-packages\grpc\_channel.py", line 733 in result
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 249 in _poll_locked
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\gcs_pubsub.py", line 385 in poll
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\import_thread.py", line 70 in _run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 870 in run
File "C:\Miniconda3\envs\modin\lib\threading.py", line 932 in _bootstrap_inner
File "C:\Miniconda3\envs\modin\lib\threading.py", line 890 in _bootstrap
Thread 0x00001b3c (most recent call first):
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py", line 364 in get_objects
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\worker.py", line 1825 in get
File "C:\Miniconda3\envs\modin\lib\site-packages\ray\_private\client_mode_hook.py", line 105 in wrapper
File "D:\a\modin\modin\modin\core\execution\ray\implementations\pandas_on_ray\partitioning\partition_manager.py", line 110 in get_objects_from_partitions
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\core\dataframe\pandas\partitioning\partition_manager.py", line 866 in get_indices
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 429 in _compute_axis_labels
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 2311 in <listcomp>
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 2310 in broadcast_apply_full_axis
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 1[15](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:16) in run_f_on_minimally_updated_metadata
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 1876 in apply_full_axis
File "D:\a\modin\modin\modin\core\dataframe\pandas\dataframe\dataframe.py", line 115 in run_f_on_minimally_updated_metadata
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\core\storage_formats\pandas\query_compiler.py", line 505 in join
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\pandas\dataframe.py", line 1275 in join
File "D:\a\modin\modin\modin\logging\logger_decorator.py", line 128 in run_and_log
File "D:\a\modin\modin\modin\pandas\test\dataframe\test_join_sort.py", line 111 in test_join
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\python.py", line 192 in pytest_pyfunc_call
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py", line 39 in _multicall
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py", line 80 in _hookexec
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py", line 265 in __call__
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\python.py", line 1761 in runtest
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line [16](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:17)6 in pytest_runtest_call
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py", line 39 in _multicall
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py", line 80 in _hookexec
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py", line 265 in __call__
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 259 in <lambda>
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 338 in from_call
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 258 in call_runtest_hook
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 219 in call_and_report
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 130 in runtestprotocol
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\runner.py", line 111 in pytest_runtest_protocol
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py", line 39 in _multicall
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py", line 80 in _hookexec
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py", line 265 in __call__
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py", line 347 in pytest_runtestloop
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py", line 39 in _multicall
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py", line 80 in _hookexec
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py", line 265 in __call__
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py", line 322 in _main
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py", line 268 in wrap_session
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\main.py", line 315 in pytest_cmdline_main
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_callers.py", line 39 in _multicall
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_manager.py", line 80 in _hookexec
File "C:\Miniconda3\envs\modin\lib\site-packages\pluggy\_hooks.py", line 265 in __call__
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\config\__init__.py", line 164 in main
File "C:\Miniconda3\envs\modin\lib\site-packages\_pytest\config\__init__.py", line [18](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:19)7 in console_main
File "C:\Miniconda3\envs\modin\lib\site-packages\pytest\__main__.py", line 5 in <module>
File "C:\Miniconda3\envs\modin\lib\runpy.py", line 87 in _run_code
File "C:\Miniconda3\envs\modin\lib\runpy.py", line [19](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:20)4 in _run_module_as_main
D:\a\_temp\16c1e1a0-adaf-4bff-9b1c-40fb0dbccb[25](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:26).sh: line 1: 10[32](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:33) Segmentation fault python -m pytest modin/pandas/test/dataframe/test_join_sort.py
modin\pandas\test\dataframe\test_join_sort.py ..
Error: Process completed with exit code 1[39](https://github.com/modin-project/modin/runs/7581926651?check_suite_focus=true#step:6:40).
```
</details> | non_main | test windows ray ci flaky segmentation fault and windows fatal exception access violation here s an instance from modin pandas test dataframe test join sort py stack trace test session starts platform python pytest pluggy benchmark defaults timer time perf counter disable gc false min rounds min time max time calibration precision warmup false warmup iterations rootdir d a modin modin configfile setup cfg plugins faker benchmark cov forked xdist collected items windows fatal exception access violation thread most recent call first file c envs modin lib site packages grpc channel py line in channel spin file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent call first file c envs modin lib site packages grpc channel py line in channel spin file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent call first file c envs modin lib site packages grpc channel py line in channel spin file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent call first file c envs modin lib threading py line in wait file c envs modin lib site packages grpc common py line in wait once file c envs modin lib site packages grpc common py line in wait file c envs modin lib site packages grpc channel py line in result file c envs modin lib site packages ray private gcs pubsub py line in poll locked file c envs modin lib site packages ray private gcs pubsub py line in poll file c envs modin lib site packages ray worker py line in print logs file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent 
call first file c envs modin lib threading py line in wait file c envs modin lib site packages grpc common py line in wait once file c envs modin lib site packages grpc common py line in wait file c envs modin lib site packages grpc channel py line in result file c envs modin lib site packages ray private gcs pubsub py line in poll locked file c envs modin lib site packages ray private gcs pubsub py line in poll file c envs modin lib site packages ray worker py line in listen error messages file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent call first file c envs modin lib threading py line in wait file c envs modin lib site packages grpc common py line in wait once file c envs modin lib site packages grpc common py line in wait file c envs modin lib site packages grpc channel py line in result file c envs modin lib site packages ray private gcs pubsub py line in poll locked file c envs modin lib site packages ray private gcs pubsub py line in poll file c envs modin lib site packages ray private import thread py line in run file c envs modin lib threading py line in run file c envs modin lib threading py line in bootstrap inner file c envs modin lib threading py line in bootstrap thread most recent call first file c envs modin lib site packages ray worker py line in get objects file c envs modin lib site packages ray worker py line in get file c envs modin lib site packages ray private client mode hook py line in wrapper file d a modin modin modin core execution ray implementations pandas on ray partitioning partition manager py line in get objects from partitions file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin core dataframe pandas partitioning partition manager py line in get indices file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin 
core dataframe pandas dataframe dataframe py line in compute axis labels file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin core dataframe pandas dataframe dataframe py line in file d a modin modin modin core dataframe pandas dataframe dataframe py line in broadcast apply full axis file d a modin modin modin core dataframe pandas dataframe dataframe py line in run f on minimally updated metadata file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin core dataframe pandas dataframe dataframe py line in apply full axis file d a modin modin modin core dataframe pandas dataframe dataframe py line in run f on minimally updated metadata file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin core storage formats pandas query compiler py line in join file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin pandas dataframe py line in join file d a modin modin modin logging logger decorator py line in run and log file d a modin modin modin pandas test dataframe test join sort py line in test join file c envs modin lib site packages pytest python py line in pytest pyfunc call file c envs modin lib site packages pluggy callers py line in multicall file c envs modin lib site packages pluggy manager py line in hookexec file c envs modin lib site packages pluggy hooks py line in call file c envs modin lib site packages pytest python py line in runtest file c envs modin lib site packages pytest runner py line in pytest runtest call file c envs modin lib site packages pluggy callers py line in multicall file c envs modin lib site packages pluggy manager py line in hookexec file c envs modin lib site packages pluggy hooks py line in call file c envs modin lib site packages pytest runner py line in file c envs modin lib site packages pytest runner py line in from call file c envs modin lib site 
packages pytest runner py line in call runtest hook file c envs modin lib site packages pytest runner py line in call and report file c envs modin lib site packages pytest runner py line in runtestprotocol file c envs modin lib site packages pytest runner py line in pytest runtest protocol file c envs modin lib site packages pluggy callers py line in multicall file c envs modin lib site packages pluggy manager py line in hookexec file c envs modin lib site packages pluggy hooks py line in call file c envs modin lib site packages pytest main py line in pytest runtestloop file c envs modin lib site packages pluggy callers py line in multicall file c envs modin lib site packages pluggy manager py line in hookexec file c envs modin lib site packages pluggy hooks py line in call file c envs modin lib site packages pytest main py line in main file c envs modin lib site packages pytest main py line in wrap session file c envs modin lib site packages pytest main py line in pytest cmdline main file c envs modin lib site packages pluggy callers py line in multicall file c envs modin lib site packages pluggy manager py line in hookexec file c envs modin lib site packages pluggy hooks py line in call file c envs modin lib site packages pytest config init py line in main file c envs modin lib site packages pytest config init py line in console main file c envs modin lib site packages pytest main py line in file c envs modin lib runpy py line in run code file c envs modin lib runpy py line in run module as main d a temp adaf line segmentation fault python m pytest modin pandas test dataframe test join sort py modin pandas test dataframe test join sort py error process completed with exit code | 0 |
1,041 | 4,846,470,391 | IssuesEvent | 2016-11-10 11:53:02 | Particular/ServiceInsight | https://api.github.com/repos/Particular/ServiceInsight | closed | Help does not point to ServiceInsight page | Tag: Maintainer Prio Type: Bug | ServiceInsight Help goes to http://docs.particular.net but should be http://docs.particular.net/serviceinsight.
The PI and SC both deep link to the relevant doco, this should too.
| True | Help does not point to ServiceInsight page - ServiceInsight Help goes to http://docs.particular.net but should be http://docs.particular.net/serviceinsight.
The PI and SC both deep link to the relevant doco, this should too.
| main | help does not point to serviceinsight page serviceinsight help goes to but should be the pi and sc both deep link to the relevant doco this should too | 1 |
4,107 | 19,513,320,778 | IssuesEvent | 2021-12-29 04:56:04 | aws/aws-sam-cli-app-templates | https://api.github.com/repos/aws/aws-sam-cli-app-templates | closed | Just like NodeJs, create .NET Core 3.1 C# **Quick Start: SQS** | maintainer/need-response | It would be nice to have the same NodeJs Quick Start Templates for .NET Core 3.1 | True | Just like NodeJs, create .NET Core 3.1 C# **Quick Start: SQS** - It would be nice to have the same NodeJs Quick Start Templates for .NET Core 3.1 | main | just like nodejs create net core c quick start sqs it would be nice to have the same nodejs quick start templates for net core | 1 |
176,337 | 28,071,256,873 | IssuesEvent | 2023-03-29 19:15:36 | NattyNarwhal/Submariner | https://api.github.com/repos/NattyNarwhal/Submariner | closed | Get a new Big Sur style icon | enhancement design | I actually like the spirit of the current one (look like the hatch of an old submarine, 1000 leagues under the sea kinda vibes), though there are some issues like the actual musical note icon and the inner part inside of the hatch looking a bit weird. It'd be nice to make it a RoundRect too.
Notes from a friend:
> uhhh, the usual suspects to go to would be places like theiconfactory or louie mantia
> but also look at the artists who do the apollo ultra icons on /r/ApolloApp
> Christian is probably a good person to ask for this sort of thing | 1.0 | Get a new Big Sur style icon - I actually like the spirit of the current one (look like the hatch of an old submarine, 1000 leagues under the sea kinda vibes), though there are some issues like the actual musical note icon and the inner part inside of the hatch looking a bit weird. It'd be nice to make it a RoundRect too.
Notes from a friend:
> uhhh, the usual suspects to go to would be places like theiconfactory or louie mantia
> but also look at the artists who do the apollo ultra icons on /r/ApolloApp
> Christian is probably a good person to ask for this sort of thing | non_main | get a new big sur style icon i actually like the spirit of the current one look like the hatch of an old submarine leagues under the sea kinda vibes though there are some issues like the actual musical note icon and the inner part inside of the hatch looking a bit weird it d be nice to make it a roundrect too notes from a friend uhhh the usual suspects to go to would be places like theiconfactory or louie mantia but also look at the artists who do the apollo ultra icons on r apolloapp christian is probably a good person to ask for this sort of thing | 0 |
458 | 3,636,010,678 | IssuesEvent | 2016-02-12 00:24:04 | antigenomics/vdjdb-db | https://api.github.com/repos/antigenomics/vdjdb-db | closed | Combine extra columns to a single "comment" column | maintainance | - Convert table of additional columns to a single column with JSON data. For example
tissue | cell type
--------|-----------
``spleen`` | ``cd8``
changes to
comment|
--------|
``{ "tissue":"spleen", "cell type":"cd8" }``|
- Do it automatically upon database assembly. | True | Combine extra columns to a single "comment" column - - Covert table of additional columns to a single column with JSON data. For example
tissue | cell type
--------|-----------
``spleen`` | ``cd8``
changes to
comment|
--------|
``{ "tissue":"spleen", "cell type":"cd8" }``|
- Do it automatically upon database assembly. | main | combine extra columns to a single comment column convert table of additional columns to a single column with json data for example tissue cell type spleen changes to comment tissue spleen cell type do it automatically upon database assembly | 1
288,150 | 8,826,784,585 | IssuesEvent | 2019-01-03 04:56:53 | Pigmice2733/peregrine-backend | https://api.github.com/repos/Pigmice2733/peregrine-backend | closed | Refresh tokens | low priority | We should add refresh tokens. This doesn't need to happen until after we get everything else done. | 1.0 | Refresh tokens - We should add refresh tokens. This doesn't need to happen until after we get everything else done. | non_main | refresh tokens we should add refresh tokens this doesn t need to happen until after we get everything else done | 0 |
3,877 | 17,172,356,398 | IssuesEvent | 2021-07-15 07:03:40 | arcticicestudio/nord-vim | https://api.github.com/repos/arcticicestudio/nord-vim | closed | Possible conflicting file names for lightline theme | context-config context-plugin-support scope-compatibility scope-maintainability scope-ux status-reproduction type-improvement | itchyny/lightline.vim#488
As we tested, removing the file `autoload/lightline/colorscheme/nord.vim` in `nord-vim` solves this issue. | True | Possible conflicting file names for lightline theme - itchyny/lightline.vim#488
As we tested, removing the file `autoload/lightline/colorscheme/nord.vim` in `nord-vim` solves this issue. | main | possible conflicting file names for lightline theme itchyny lightline vim as we tested removing the file autoload lightline colorscheme nord vim in nord vim solves this issue | 1 |
1,193 | 5,109,564,624 | IssuesEvent | 2017-01-05 21:12:31 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | ec2_asg_facts not gathering all ASG's | affects_2.3 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_asg_facts
##### ANSIBLE VERSION
ansible 2.3.0 - devel branch
Also present in 2.2.0 rc1
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
OSX 10.11.5
##### SUMMARY
When running a ec2_asg_facts it does not fetch all ASGs that are in the account
##### STEPS TO REPRODUCE
Difficult - you will need quite a few ASG's. It looks as though the golden number is 51.
Ansible will go off and happily describe the first 50 ASG but completely ignore the 51st. They are reported back in alphabetical order.
I have done some limited debugging of lib/ansible/extras/cloud/amazon/ecs_asg_facts.py.
Add `print asgs` between line 297 and 298 and the module will fail after printing 50 instances.
` - ec2_asg_facts:
profile: "{{ profile }}"
region: "{{ region }}"
register: current_instances
- debug: msg="{{current_instances}}"`
This becomes particularly problematic when adding a name to the above as it will still only get the first 50 ASG's
##### EXPECTED RESULTS
I would expect it to describe all ASG's
| True | ec2_asg_facts not gathering all ASG's - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_asg_facts
##### ANSIBLE VERSION
ansible 2.3.0 - devel branch
Also present in 2.2.0 rc1
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
OSX 10.11.5
##### SUMMARY
When running a ec2_asg_facts it does not fetch all ASGs that are in the account
##### STEPS TO REPRODUCE
Difficult - you will need quite a few ASG's. It looks as though the golden number is 51.
Ansible will go off and happily describe the first 50 ASG but completely ignore the 51st. They are reported back in alphabetical order.
I have done some limited debugging of lib/ansible/extras/cloud/amazon/ecs_asg_facts.py.
Add `print asgs` between line 297 and 298 and the module will fail after printing 50 instances.
` - ec2_asg_facts:
profile: "{{ profile }}"
region: "{{ region }}"
register: current_instances
- debug: msg="{{current_instances}}"`
This becomes particularly problematic when adding a name to the above as it will still only get the first 50 ASG's
##### EXPECTED RESULTS
I would expect it to describe all ASG's
| main | asg facts not gathering all asg s issue type bug report component name asg facts ansible version ansible devel branch also present in configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment osx summary when running a asg facts it does not fetch all asgs that are in the account steps to reproduce difficult you will need quite a few asg s it looks as though the golden number is ansible will go off and happily describe the first asg but completely ignore the they are reported back in alphabetical order i have done some limited debugging of lib ansible extras cloud amazon ecs asg facts py add print asgs between line and and the module will fail after printing instances asg facts profile profile region region register current instances debug msg current instances this becomes particualry problematic when adding a name to the above as it will still only get the first asg s expected results i would expect it to describe all asg s | 1 |
4,205 | 20,614,429,156 | IssuesEvent | 2022-03-07 11:49:02 | Lissy93/dashy | https://api.github.com/repos/Lissy93/dashy | closed | Build fails | 🐛 Bug 👤 Awaiting Maintainer Response | ### Environment
Self-Hosted (Docker)
### Version
V-2.0.4.
### Describe the problem
`docker exec -it Dashy yarn build` fails in V 2.0.4.
image: lissy93/dashy
/app/src dir is mostly empty
```sh
ERROR Failed to compile with 1 error 9:00:02 AM
This relative module was not found:
* ./src/main.js in multi ./src/main.js
```
### Additional info


### Please tick the boxes
- [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number)
- [X] You've checked that this [issue hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue)
- [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide
- [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct) | True | Build fails - ### Environment
Self-Hosted (Docker)
### Version
V-2.0.4.
### Describe the problem
`docker exec -it Dashy yarn build` fails in V 2.0.4.
image: lissy93/dashy
/app/src dir is mostly empty
```sh
ERROR Failed to compile with 1 error 9:00:02 AM
This relative module was not found:
* ./src/main.js in multi ./src/main.js
```
### Additional info


### Please tick the boxes
- [X] You are using a [supported](https://github.com/Lissy93/dashy/blob/master/.github/SECURITY.md#supported-versions) version of Dashy (check the first two digits of the version number)
- [X] You've checked that this [issue hasn't already been raised](https://github.com/Lissy93/dashy/issues?q=is%3Aissue)
- [X] You've checked the [docs](https://github.com/Lissy93/dashy/tree/master/docs#readme) and [troubleshooting](https://github.com/Lissy93/dashy/blob/master/docs/troubleshooting.md#troubleshooting) guide
- [X] You agree to the [code of conduct](https://github.com/Lissy93/dashy/blob/master/.github/CODE_OF_CONDUCT.md#contributor-covenant-code-of-conduct) | main | build fails environment self hosted docker version v describe the problem docker exec it dashy yarn build fails in v image dashy app src dir is mostly empty sh error failed to compile with error am this relative module was not found src main js in multi src main js additional info please tick the boxes you are using a version of dashy check the first two digits of the version number you ve checked that this you ve checked the and guide you agree to the | 1 |
827 | 4,461,671,472 | IssuesEvent | 2016-08-24 06:54:45 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | apk module silently ignores `name` argument if `upgrade` is specified | bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
- Documentation Report
##### COMPONENT NAME
apk
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
`ansible-container`
##### OS / ENVIRONMENT
`ansible-container`, `python:2-alpine`
##### SUMMARY
If you use the apk module and specify arguments for both `name` and `upgrade`, the package named `name` is not installed, and ansible reports success for the task.
##### STEPS TO REPRODUCE
```
- apk: upgrade=yes update_cache=yes name=python3
```
##### EXPECTED RESULTS
The package specified by `name` should either be installed, or ansible should flop with a syntax error.
##### ACTUAL RESULTS
Ansible pretends everything is peachy but silently ignored the instruction to install the `name`d package. | True | apk module silently ignores `name` argument if `upgrade` is specified - ##### ISSUE TYPE
- Bug Report
- Documentation Report
##### COMPONENT NAME
apk
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
`ansible-container`
##### OS / ENVIRONMENT
`ansible-container`, `python:2-alpine`
##### SUMMARY
If you use the apk module and specify arguments for both `name` and `upgrade`, the package named `name` is not installed, and ansible reports success for the task.
##### STEPS TO REPRODUCE
```
- apk: upgrade=yes update_cache=yes name=python3
```
##### EXPECTED RESULTS
The package specified by `name` should either be installed, or ansible should flop with a syntax error.
##### ACTUAL RESULTS
Ansible pretends everything is peachy but silently ignored the instruction to install the `name`d package. | main | apk module silently ignores name argument if upgrade is specified issue type bug report documentation report component name apk ansible version ansible config file configured module search path default w o overrides configuration ansible container os environment ansible container python alpine summary if you use the apk module and specify arguments for both name and upgrade the package named name is not installed and ansible reports success for the task steps to reproduce apk upgrade yes update cache yes name expected results the package specified by name should either be installed or ansible should flop with a syntax error actual results ansible pretends everything is peachy but silently ignored the instruction to install the name d package | 1 |
248,730 | 7,935,386,442 | IssuesEvent | 2018-07-09 04:48:19 | cilium/cilium | https://api.github.com/repos/cilium/cilium | closed | Measure endpoint and policy computation time | area/metrics area/policy kind/enhancement priority/high | Similar to how we measure `buildDuration` in `regenerateBPF()`, it would be nice to also measure the time to compute policy. For reference:
https://github.com/cilium/cilium/blob/91287b9f0665ddd22fa4933eece6ae276e4157af/pkg/endpoint/bpf.go#L538-L548
This could be something as simple as adding timers and extending the debug logs.
* [ ] Measure `regeneratePolicy()` (Full policy regeneration)
* [ ] Measure `regenerate()` (Full endpoint regeneration)
* [ ] Measure `updateNetworkPolicy()` (Proxy policy calculation) | 1.0 | Measure endpoint and policy computation time - Similar to how we measure `buildDuration` in `regenerateBPF()`, it would be nice to also measure the time to compute policy. For reference:
https://github.com/cilium/cilium/blob/91287b9f0665ddd22fa4933eece6ae276e4157af/pkg/endpoint/bpf.go#L538-L548
This could be something as simple as adding timers and extending the debug logs.
* [ ] Measure `regeneratePolicy()` (Full policy regeneration)
* [ ] Measure `regenerate()` (Full endpoint regeneration)
* [ ] Measure `updateNetworkPolicy()` (Proxy policy calculation) | non_main | measure endpoint and policy computation time similar to how we measure buildduration in regeneratebpf it would be nice to also measure the time to compute policy for reference this could be something as simple as adding timers and extending the debug logs measure regeneratepolicy full policy regeneration measure regenerate full endpoint regeneration measure updatenetworkpolicy proxy policy calculation | 0 |
24,341 | 4,075,818,829 | IssuesEvent | 2016-05-29 13:40:00 | IDgis/geoportaal-test | https://api.github.com/repos/IDgis/geoportaal-test | closed | include a naam.xls or leesmij.xls with the download | gebruikerstest wens | currently the meta info is exposed via an xls that carries the id; it would be better to use the name of the downloaded file for this.
In the example below:
You can download the file B0_6_cijferig_postcodebestand.zip via the following link:
https://acc.geoportaaloverijssel.nl/download-result/614572a0-34de-4a43-83a4-6fd2227c15fd.
the xls thus becomes: B0_6_cijferig_postcodebestand.xls
if it is easier, a leesmij.xls is also a good alternative
then people understand more clearly that more info can be found here
nb: the xls is in the right place in the zip, next to the file that was downloaded.
the location is ok, the name could be better | 1.0 | include a naam.xls or leesmij.xls with the download - currently the meta info is exposed via an xls that carries the id; it would be better to use the name of the downloaded file for this.
In the example below:
You can download the file B0_6_cijferig_postcodebestand.zip via the following link:
https://acc.geoportaaloverijssel.nl/download-result/614572a0-34de-4a43-83a4-6fd2227c15fd.
the xls thus becomes: B0_6_cijferig_postcodebestand.xls
if it is easier, a leesmij.xls is also a good alternative
then people understand more clearly that more info can be found here
nb: the xls is in the right place in the zip, next to the file that was downloaded.
the location is ok, the name could be better | non_main | include a naam xls or leesmij xls with the download currently the meta info is exposed via an xls that carries the id it would be better to use the name of the downloaded file for this in the example below you can download the file cijferig postcodebestand zip via the following link the xls thus becomes cijferig postcodebestand xls if it is easier a leesmij xls is also a good alternative then people understand more clearly that more info can be found here nb the xls is in the right place in the zip next to the file that was downloaded the location is ok the name could be better | 0 |
447,912 | 31,725,878,339 | IssuesEvent | 2023-09-10 23:20:43 | b-rodrigues/rix | https://api.github.com/repos/b-rodrigues/rix | closed | README: new "From quickstart to deep dive" top section | documentation enhancement | note to myself: To have a quick entry section for code-affine people. Later will show code of bootstrap helper like `make_launcher()`. For now it will be installing {rix} and bumping Rix R env straight-forwardly. This means moving small parts in README. | 1.0 | README: new "From quickstart to deep dive" top section - note to myself: To have a quick entry section for code-affine people. Later will show code of bootstrap helper like `make_launcher()`. For now it will be installing {rix} and bumping Rix R env straight-forwardly. This means moving small parts in README. | non_main | readme new from quickstart to deep dive top section note to myself to have a quick entry section for code affine people later will show code of bootstrap helper like make launcher for now it will be installing rix and bumping rix r env straight forwardly this means moving small parts in readme | 0 |
699 | 4,270,973,478 | IssuesEvent | 2016-07-13 09:21:34 | Particular/NServiceBus.RabbitMQ | https://api.github.com/repos/Particular/NServiceBus.RabbitMQ | opened | Update release-6.0.0 to NSB 6.0.0-beta0006 | Impact: L Size: S Tag: Maintainer Prio | This beta contains breaking changes wrt recoverability and routing.
<!-- Connects to https://github.com/Particular/V6Launch/issues/69 --> | True | Update release-6.0.0 to NSB 6.0.0-beta0006 - This beta contains breaking changes wrt recoverability and routing.
<!-- Connects to https://github.com/Particular/V6Launch/issues/69 --> | main | update release to nsb this beta contains breaking changes wrt recoverability and routing | 1 |
248,380 | 18,858,067,963 | IssuesEvent | 2021-11-12 09:20:57 | KishorKumar11/pe | https://api.github.com/repos/KishorKumar11/pe | opened | Documentation Bug - Documentation never states that there should spaces between keywords | type.DocumentationBug severity.Low | *Applies to all commands*
input: set dish limit [dish_name]
Users might add whitespaces here and there between keywords by mistake (as shown in the input). There should be an enforcing statement to the user in the UG to avoid this problem.

<!--session: 1636703041002-f7e6ddbb-8a5f-4883-9815-9cc7b66645b6-->
<!--Version: Web v3.4.1--> | 1.0 | Documentation Bug - Documentation never states that there should spaces between keywords - *Applies to all commands*
input: set dish limit [dish_name]
Users might add whitespaces here and there between keywords by mistake (as shown in the input). There should be an enforcing statement to the user in the UG to avoid this problem.

<!--session: 1636703041002-f7e6ddbb-8a5f-4883-9815-9cc7b66645b6-->
<!--Version: Web v3.4.1--> | non_main | documentation bug documentation never states that there should spaces between keywords applies to all commands input set dish limit users might add whitespaces here and there between keyword by mistake as shown in input there should be an enforcing statement to the user in ug to avoid this problem | 0 |
271 | 3,039,009,665 | IssuesEvent | 2015-08-07 04:36:57 | DotNetAnalyzers/StyleCopAnalyzers | https://api.github.com/repos/DotNetAnalyzers/StyleCopAnalyzers | opened | New rule proposal: Store files as UTF-8 with byte order mark | maintainability needs discussion new rule proposal | **Category:** Maintainability
**Name:** Store files as UTF-8 with byte order mark
**Description:** Source files should be saved using the UTF-8 encoding with a byte order mark
**Rationale:** Storing files in this encoding ensures that the files are always treated the same way by the compiler, even when compiled on systems with varying default system encodings. In addition, this encoding is the most widely supported encoding to features like visual diffs on GitHub and other tooling.
Related issue reports:
* https://github.com/dotnet/roslyn/pull/479#issuecomment-74431004
* dotnet/roslyn#4022
* dotnet/roslyn#4222
* dotnet/roslyn#4264
* dotnet/roslyn#4255
* dotnet/roslyn#4298
* dotnet/corefx#2346 | True | New rule proposal: Store files as UTF-8 with byte order mark - **Category:** Maintainability
**Name:** Store files as UTF-8 with byte order mark
**Description:** Source files should be saved using the UTF-8 encoding with a byte order mark
**Rationale:** Storing files in this encoding ensures that the files are always treated the same way by the compiler, even when compiled on systems with varying default system encodings. In addition, this encoding is the most widely supported encoding to features like visual diffs on GitHub and other tooling.
Related issue reports:
* https://github.com/dotnet/roslyn/pull/479#issuecomment-74431004
* dotnet/roslyn#4022
* dotnet/roslyn#4222
* dotnet/roslyn#4264
* dotnet/roslyn#4255
* dotnet/roslyn#4298
* dotnet/corefx#2346 | main | new rule proposal store files as utf with byte order mark category maintainability name store files as utf with byte order mark description source files should be saved using the utf encoding with a byte order mark rationale storing files in this encoding ensures that the files are always treated the same way by the compiler even when compiled on systems with varying default system encodings in addition this encoding is the most widely supported encoding to features like visual diffs on github and other tooling related issue reports dotnet roslyn dotnet roslyn dotnet roslyn dotnet roslyn dotnet roslyn dotnet corefx | 1 |
763,381 | 26,754,771,178 | IssuesEvent | 2023-01-30 22:54:39 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | opened | Solana provider renderer crash | priority/P1 OS/Desktop feature/web3/wallet/solana feature/web3/wallet/dapps | da590600-668b-8709-0000-000000000000
795a0600-668b-8709-0000-000000000000
7a5a0600-668b-8709-0000-000000000000
```
[ 00 ] brave_wallet::JSSolanaProvider::OnIsSolanaKeyringCreated(bool) ( render_frame_impl.cc:2299 )
[ 01 ] brave_wallet::JSSolanaProvider::OnIsSolanaKeyringCreated(bool) ( js_solana_provider.cc:1000 )
[ 02 ] network::mojom::CookieManager_DeleteCanonicalCookie_ForwardToCallback::Accept(mojo::Message*) ( callback.h:152 )
[ 03 ] mojo::InterfaceEndpointClient::HandleValidatedMessage(mojo::Message*) ( interface_endpoint_client.cc:1002 )
[ 04 ] mojo::internal::MultiplexRouter::Accept(mojo::Message*) ( message_dispatcher.cc:43 )
[ 05 ] mojo::MessageDispatcher::Accept(mojo::Message*) ( message_dispatcher.cc:43 )
[ 06 ] base::internal::Invoker<base::internal::BindState<void (mojo::Connector::*)(unsigned int), base::internal::UnretainedWrapper<mojo::Connector, base::RawPtrBanDanglingIfSupported>>, void (unsigned int)>::Run(base::internal::BindStateBase*, unsigned int) ( connector.cc:542 )
[ 07 ] base::internal::Invoker<base::internal::BindState<void (mojo::SimpleWatcher::*)(int, unsigned int, mojo::HandleSignalsState const&), base::WeakPtr<mojo::SimpleWatcher>, int, unsigned int, mojo::HandleSignalsState>, void ()>::RunOnce(base::internal::BindStateBase*) ( callback.h:333 )
[ 08 ] non-virtual thunk to base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::DoWork() ( callback.h:152 )
[ 09 ] base::MessagePumpCFRunLoopBase::RunWork() ( message_pump_mac.mm:475 )
[ 10 ] base::mac::CallWithEHFrame(void () block_pointer)
[ 11 ] base::MessagePumpCFRunLoopBase::RunWorkSource(void*) ( message_pump_mac.mm:447 )
[ 12 ] 0x1ac7a9a30
[ 13 ] 0x1ac7a99c4
[ 14 ] 0x1ac7a9734
[ 15 ] 0x1ac7a8338
[ 16 ] 0x1ac7a78a0
[ 17 ] 0x1ad6afe54
[ 18 ] base::MessagePumpNSRunLoop::DoRun(base::MessagePump::Delegate*) ( message_pump_mac.mm:768 )
[ 19 ] base::MessagePumpCFRunLoopBase::Run(base::MessagePump::Delegate*) ( message_pump_mac.mm:172 )
[ 20 ] base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run(bool, base::TimeDelta) ( thread_controller_with_message_pump_impl.cc:644 )
[ 21 ] base::RunLoop::Run(base::Location const&) ( run_loop.cc:0 )
[ 22 ] content::RendererMain(content::MainFunctionParams) ( renderer_main.cc:330 )
[ 23 ] content::RunOtherNamedProcessTypeMain(std::Cr::basic_string<char, std::Cr::char_traits<char>, std::Cr::allocator<char>> const&, content::MainFunctionParams, content::ContentMainDelegate*) ( content_main_runner_impl.cc:746 )
[ 24 ] content::ContentMainRunnerImpl::Run() ( content_main_runner_impl.cc:1100 )
[ 25 ] content::RunContentProcess(content::ContentMainParams, content::ContentMainRunner*) ( content_main.cc:344 )
[ 26 ] content::ContentMain(content::ContentMainParams) ( content_main.cc:372 )
[ 27 ] ChromeMain ( chrome_main.cc:174 )
[ 28 ] main ( chrome_exe_main_mac.cc:216 )
[ 29 ] 0x1ac39fe4c
``` | 1.0 | Solana provider renderer crash - da590600-668b-8709-0000-000000000000
795a0600-668b-8709-0000-000000000000
7a5a0600-668b-8709-0000-000000000000
```
[ 00 ] brave_wallet::JSSolanaProvider::OnIsSolanaKeyringCreated(bool) ( render_frame_impl.cc:2299 )
[ 01 ] brave_wallet::JSSolanaProvider::OnIsSolanaKeyringCreated(bool) ( js_solana_provider.cc:1000 )
[ 02 ] network::mojom::CookieManager_DeleteCanonicalCookie_ForwardToCallback::Accept(mojo::Message*) ( callback.h:152 )
[ 03 ] mojo::InterfaceEndpointClient::HandleValidatedMessage(mojo::Message*) ( interface_endpoint_client.cc:1002 )
[ 04 ] mojo::internal::MultiplexRouter::Accept(mojo::Message*) ( message_dispatcher.cc:43 )
[ 05 ] mojo::MessageDispatcher::Accept(mojo::Message*) ( message_dispatcher.cc:43 )
[ 06 ] base::internal::Invoker<base::internal::BindState<void (mojo::Connector::*)(unsigned int), base::internal::UnretainedWrapper<mojo::Connector, base::RawPtrBanDanglingIfSupported>>, void (unsigned int)>::Run(base::internal::BindStateBase*, unsigned int) ( connector.cc:542 )
[ 07 ] base::internal::Invoker<base::internal::BindState<void (mojo::SimpleWatcher::*)(int, unsigned int, mojo::HandleSignalsState const&), base::WeakPtr<mojo::SimpleWatcher>, int, unsigned int, mojo::HandleSignalsState>, void ()>::RunOnce(base::internal::BindStateBase*) ( callback.h:333 )
[ 08 ] non-virtual thunk to base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::DoWork() ( callback.h:152 )
[ 09 ] base::MessagePumpCFRunLoopBase::RunWork() ( message_pump_mac.mm:475 )
[ 10 ] base::mac::CallWithEHFrame(void () block_pointer)
[ 11 ] base::MessagePumpCFRunLoopBase::RunWorkSource(void*) ( message_pump_mac.mm:447 )
[ 12 ] 0x1ac7a9a30
[ 13 ] 0x1ac7a99c4
[ 14 ] 0x1ac7a9734
[ 15 ] 0x1ac7a8338
[ 16 ] 0x1ac7a78a0
[ 17 ] 0x1ad6afe54
[ 18 ] base::MessagePumpNSRunLoop::DoRun(base::MessagePump::Delegate*) ( message_pump_mac.mm:768 )
[ 19 ] base::MessagePumpCFRunLoopBase::Run(base::MessagePump::Delegate*) ( message_pump_mac.mm:172 )
[ 20 ] base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run(bool, base::TimeDelta) ( thread_controller_with_message_pump_impl.cc:644 )
[ 21 ] base::RunLoop::Run(base::Location const&) ( run_loop.cc:0 )
[ 22 ] content::RendererMain(content::MainFunctionParams) ( renderer_main.cc:330 )
[ 23 ] content::RunOtherNamedProcessTypeMain(std::Cr::basic_string<char, std::Cr::char_traits<char>, std::Cr::allocator<char>> const&, content::MainFunctionParams, content::ContentMainDelegate*) ( content_main_runner_impl.cc:746 )
[ 24 ] content::ContentMainRunnerImpl::Run() ( content_main_runner_impl.cc:1100 )
[ 25 ] content::RunContentProcess(content::ContentMainParams, content::ContentMainRunner*) ( content_main.cc:344 )
[ 26 ] content::ContentMain(content::ContentMainParams) ( content_main.cc:372 )
[ 27 ] ChromeMain ( chrome_main.cc:174 )
[ 28 ] main ( chrome_exe_main_mac.cc:216 )
[ 29 ] 0x1ac39fe4c
``` | non_main | solana provider renderer crash brave wallet jssolanaprovider onissolanakeyringcreated bool render frame impl cc brave wallet jssolanaprovider onissolanakeyringcreated bool js solana provider cc network mojom cookiemanager deletecanonicalcookie forwardtocallback accept mojo message callback h mojo interfaceendpointclient handlevalidatedmessage mojo message interface endpoint client cc mojo internal multiplexrouter accept mojo message message dispatcher cc mojo messagedispatcher accept mojo message message dispatcher cc base internal invoker void unsigned int run base internal bindstatebase unsigned int connector cc base internal invoker int unsigned int mojo handlesignalsstate void runonce base internal bindstatebase callback h non virtual thunk to base sequence manager internal threadcontrollerwithmessagepumpimpl dowork callback h base messagepumpcfrunloopbase runwork message pump mac mm base mac callwithehframe void block pointer base messagepumpcfrunloopbase runworksource void message pump mac mm base messagepumpnsrunloop dorun base messagepump delegate message pump mac mm base messagepumpcfrunloopbase run base messagepump delegate message pump mac mm base sequence manager internal threadcontrollerwithmessagepumpimpl run bool base timedelta thread controller with message pump impl cc base runloop run base location const run loop cc content renderermain content mainfunctionparams renderer main cc content runothernamedprocesstypemain std cr basic string std cr allocator const content mainfunctionparams content contentmaindelegate content main runner impl cc content contentmainrunnerimpl run content main runner impl cc content runcontentprocess content contentmainparams content contentmainrunner content main cc content contentmain content contentmainparams content main cc chromemain chrome main cc main chrome exe main mac cc | 0 |
185,632 | 14,362,685,702 | IssuesEvent | 2020-11-30 20:14:04 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] PermissionPrecedenceTests.testDifferentCombinationsOfIndices failed | :Security/Security >test-failure Team:Security | The test `PermissionPrecedenceTests.testDifferentCombinationsOfIndices` failed today on CI on 7.5 branch:
```
java.lang.AssertionError
:
All incoming requests on node [node_s1] should have finished. Expected 0 but got 322
Expected: <0L>
but: was <322L>
```
The logs show the following warning:
```
[2020-02-06T20:28:14,781][WARN ][o.e.t.TcpTransport ] [node_s2] exception caught on transport layer [TcpNioSocketChannel{localAddress=0.0.0.0/0.0.0.0:50738, remoteAddress=null}], closing connection
javax.net.ssl.SSLException: Closed engine without receiving the close alert message.
at org.elasticsearch.xpack.security.transport.nio.SSLDriver.close(SSLDriver.java:165) ~[main/:?]
at org.elasticsearch.core.internal.io.IOUtils.close(IOUtils.java:104) ~[elasticsearch-core-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.core.internal.io.IOUtils.close(IOUtils.java:62) ~[elasticsearch-core-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.xpack.security.transport.nio.SSLChannelContext.closeFromSelector(SSLChannelContext.java:206) ~[main/:?]
at org.elasticsearch.nio.EventHandler.handleClose(EventHandler.java:240) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.closeChannel(NioSelector.java:480) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.queueChannelClose(NioSelector.java:311) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.xpack.security.transport.nio.SSLChannelContext.channelCloseTimeout(SSLChannelContext.java:217) [main/:?]
at org.elasticsearch.nio.EventHandler.handleTask(EventHandler.java:178) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.handleTask(NioSelector.java:274) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.handleScheduledTasks(NioSelector.java:268) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:184) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:131) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at java.lang.Thread.run(Thread.java:835) [?:?]
```
It does not reproduce with:
```
./gradlew ':x-pack:plugin:security:test' --tests "org.elasticsearch.integration.PermissionPrecedenceTests.testDifferentCombinationsOfIndices" \
-Dtests.seed=B9AF19744C8BB5ED \
-Dtests.security.manager=true \
-Dtests.locale=hr-HR \
-Dtests.timezone=Australia/Currie \
-Dcompiler.java=12 \
-Druntime.java=12
```
Build scan is https://gradle-enterprise.elastic.co/s/5f3u5v42akdmu | 1.0 | [CI] PermissionPrecedenceTests.testDifferentCombinationsOfIndices failed - The test `PermissionPrecedenceTests.testDifferentCombinationsOfIndices` failed today on CI on 7.5 branch:
```
java.lang.AssertionError
:
All incoming requests on node [node_s1] should have finished. Expected 0 but got 322
Expected: <0L>
but: was <322L>
```
The logs show the following warning:
```
[2020-02-06T20:28:14,781][WARN ][o.e.t.TcpTransport ] [node_s2] exception caught on transport layer [TcpNioSocketChannel{localAddress=0.0.0.0/0.0.0.0:50738, remoteAddress=null}], closing connection
javax.net.ssl.SSLException: Closed engine without receiving the close alert message.
at org.elasticsearch.xpack.security.transport.nio.SSLDriver.close(SSLDriver.java:165) ~[main/:?]
at org.elasticsearch.core.internal.io.IOUtils.close(IOUtils.java:104) ~[elasticsearch-core-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.core.internal.io.IOUtils.close(IOUtils.java:62) ~[elasticsearch-core-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.xpack.security.transport.nio.SSLChannelContext.closeFromSelector(SSLChannelContext.java:206) ~[main/:?]
at org.elasticsearch.nio.EventHandler.handleClose(EventHandler.java:240) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.closeChannel(NioSelector.java:480) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.queueChannelClose(NioSelector.java:311) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.xpack.security.transport.nio.SSLChannelContext.channelCloseTimeout(SSLChannelContext.java:217) [main/:?]
at org.elasticsearch.nio.EventHandler.handleTask(EventHandler.java:178) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.handleTask(NioSelector.java:274) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.handleScheduledTasks(NioSelector.java:268) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:184) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:131) [elasticsearch-nio-7.5.3-SNAPSHOT.jar:7.5.3-SNAPSHOT]
at java.lang.Thread.run(Thread.java:835) [?:?]
```
It does not reproduce with:
```
./gradlew ':x-pack:plugin:security:test' --tests "org.elasticsearch.integration.PermissionPrecedenceTests.testDifferentCombinationsOfIndices" \
-Dtests.seed=B9AF19744C8BB5ED \
-Dtests.security.manager=true \
-Dtests.locale=hr-HR \
-Dtests.timezone=Australia/Currie \
-Dcompiler.java=12 \
-Druntime.java=12
```
Build scan is https://gradle-enterprise.elastic.co/s/5f3u5v42akdmu | non_main | permissionprecedencetests testdifferentcombinationsofindices failed the test permissionprecedencetests testdifferentcombinationsofindices failed today on ci on branch java lang assertionerror all incoming requests on node should have finished expected but got expected but was the logs show the following warning exception caught on transport layer closing connection javax net ssl sslexception closed engine without receiving the close alert message at org elasticsearch xpack security transport nio ssldriver close ssldriver java at org elasticsearch core internal io ioutils close ioutils java at org elasticsearch core internal io ioutils close ioutils java at org elasticsearch xpack security transport nio sslchannelcontext closefromselector sslchannelcontext java at org elasticsearch nio eventhandler handleclose eventhandler java at org elasticsearch nio nioselector closechannel nioselector java at org elasticsearch nio nioselector queuechannelclose nioselector java at org elasticsearch xpack security transport nio sslchannelcontext channelclosetimeout sslchannelcontext java at org elasticsearch nio eventhandler handletask eventhandler java at org elasticsearch nio nioselector handletask nioselector java at org elasticsearch nio nioselector handlescheduledtasks nioselector java at org elasticsearch nio nioselector singleloop nioselector java at org elasticsearch nio nioselector runloop nioselector java at java lang thread run thread java it does not reproduce with gradlew x pack plugin security test tests org elasticsearch integration permissionprecedencetests testdifferentcombinationsofindices dtests seed dtests security manager true dtests locale hr hr dtests timezone australia currie dcompiler java druntime java build scan is | 0 |
129,064 | 10,561,561,839 | IssuesEvent | 2019-10-04 16:08:48 | MicrosoftDocs/vsts-docs | https://api.github.com/repos/MicrosoftDocs/vsts-docs | closed | "Show children" removed | Pri1 devops-test/tech devops/prod product-feedback | Why would you remove such a handy feature? Now I have to switch between reports and the test plan. Do you talk to users? Was this a common request?
Since it's not possible to query the outcome of testing (why is still a riddle to me; that's the one and only request PM has...), we could show all TCs of a test plan. With that gone, there is no possibility to easily bring it to Excel (copy, paste).
So one big wish: either enable a query for the Outcome of TCs, or bring back "Show children".
Thank you!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3c520dee-1218-777e-7405-551623817c03
* Version Independent ID: 82b5d172-ae1a-2b4c-82ac-595cd7609d3c
* Content: [New test plans page - Azure Test Plans](https://docs.microsoft.com/en-us/azure/devops/test/new-test-plans-page?view=azure-devops#feedback)
* Content Source: [docs/test/new-test-plans-page.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/test/new-test-plans-page.md)
* Product: **devops**
* Technology: **devops-test**
* GitHub Login: @harishkragarwal
* Microsoft Alias: **harishkragarwal** | 1.0 | "Show children" removed - Why would you remove such a handy feature? Now I have to switch between reports and the test plan. Do you talk to users? Was this a common request?
Since it's not possible to query the outcome of testing (why is still a riddle to me; that's the one and only request PM has...), we could show all TCs of a test plan. With that gone, there is no possibility to easily bring it to Excel (copy, paste).
So one big wish: either enable a query for the Outcome of TCs, or bring back "Show children".
Thank you!
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3c520dee-1218-777e-7405-551623817c03
* Version Independent ID: 82b5d172-ae1a-2b4c-82ac-595cd7609d3c
* Content: [New test plans page - Azure Test Plans](https://docs.microsoft.com/en-us/azure/devops/test/new-test-plans-page?view=azure-devops#feedback)
* Content Source: [docs/test/new-test-plans-page.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/test/new-test-plans-page.md)
* Product: **devops**
* Technology: **devops-test**
* GitHub Login: @harishkragarwal
* Microsoft Alias: **harishkragarwal** | non_main | show children removed why would you remove such a handy feature now i have to switch between reports and the test plan do you talk to users was this a common request since it s not possible to query outcome of testing why is still a riddle to me that s the one and only request pm has we could show all tc of a test plan with that gone no possibility to easily copy paste bring it to excel so one big wish either enable a query for outcome of tc or then bring back show children thank you document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops test github login harishkragarwal microsoft alias harishkragarwal | 0 |
18,245 | 10,225,128,738 | IssuesEvent | 2019-08-16 14:27:24 | rgb-org/spec | https://api.github.com/repos/rgb-org/spec | closed | Possible double-spend with double-commitment | 01-rgb A2-security bug | It is possible to commit to both P2C and OP_RETURN at the same time, creating two different proofs. It can be fixed by requiring that transactions with P2C commitments must not contain OP_RETURN outputs. | True | Possible double-spend with double-commitment - It is possible to commit to both P2C and OP_RETURN at the same time, creating two different proofs. It can be fixed by requiring that transactions with P2C commitments must not contain OP_RETURN outputs. | non_main | possible double spend with double commitment it is possible to commit to both and op return at the same time creating two different proofs it can be fixed by requiring that transactions with commitments must not contain op return outputs | 0 |
80,865 | 15,593,499,118 | IssuesEvent | 2021-03-18 13:00:56 | qbittorrent/qBittorrent | https://api.github.com/repos/qbittorrent/qBittorrent | closed | Consider using American Fuzzy Loop (AFL) | Code cleanup GH: Ghost Project management | Can you run qBittorrent with AFL http://lcamtuf.coredump.cx/afl/ ? It will reveal issues that are worth fixing, especially when they are security-related.
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/35560054-consider-using-american-fuzzy-loop-afl?utm_campaign=plugin&utm_content=tracker%2F298524&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F298524&utm_medium=issues&utm_source=github).
</bountysource-plugin> | 1.0 | Consider using American Fuzzy Loop (AFL) - Can you run qBittorrent with AFL http://lcamtuf.coredump.cx/afl/ ? It will reveal issues that are worth fixing, especially when they are security-related.
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/35560054-consider-using-american-fuzzy-loop-afl?utm_campaign=plugin&utm_content=tracker%2F298524&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F298524&utm_medium=issues&utm_source=github).
</bountysource-plugin> | non_main | consider using american fuzzy loop afl can you run qbittorrent with afl it will reveal issues that are worth fixing especially when they are security related want to back this issue we accept bounties via | 0 |
296,919 | 25,584,082,345 | IssuesEvent | 2022-12-01 07:54:06 | openBackhaul/ApplicationPattern | https://api.github.com/repos/openBackhaul/ApplicationPattern | opened | release-number pattern update | testsuite_to_be_changed | Pattern of release-number has been updated to '^([0-9]{1,2})\.([0-9]{1,2})\.([0-9]{1,2})$'.
Already, testcases are available to check for a too-short release-number, a too-long release-number, letters in the release-number, signs in the release-number, and an incorrect separator.
Additionally, a scenario can be added to test whether each placeholder for a number allows only one or two digits. In earlier release-numbers, more than two digits were allowed in a placeholder.
This scenario, "multiple digits in a placeholder", can be added to the following services:
- Service Layer - Acceptance :: Attribute correctness :: release-number checked?
- [ ] /v1/register-yourself - registry-office-application-release-number
- [ ] /v1/embed-yourself - registry-office-application-release-number
- [ ] /v1/redirect-service-request-information - service-log-application-release-number
- [ ] /v1/redirect-oam-request-information - oam-log-application-release-number
- [ ] /v1/end-subscription - subscriber-release-number
- [ ] /v1/inquire-oam-request-approvals - oam-approval-application-release-number
- [ ] /v1/update-client - old-application-release-number and new-application-release-number
- [ ] /v1/redirect-topology-change-information - topology-application-release-number
- [ ] /v1/update-operation-client - application-release-number
- Oam Layer:
- [ ] http-client/release-number :: Acceptance :: Attribute checked?
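For illustration, the updated pattern can be exercised directly; a minimal sketch (the helper name is hypothetical, the regex is quoted verbatim from this report):

```python
import re

# Pattern quoted verbatim from this report; the helper name is illustrative.
RELEASE_NUMBER_RE = re.compile(r"^([0-9]{1,2})\.([0-9]{1,2})\.([0-9]{1,2})$")

def is_valid_release_number(value: str) -> bool:
    """True when each of the three placeholders has one or two digits."""
    return RELEASE_NUMBER_RE.match(value) is not None
```

A "multiple digits in a placeholder" case such as `2.0.123` is rejected by this pattern, which is exactly the scenario proposed above.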
| 1.0 | release-number pattern update - Pattern of release-number has been updated to '^([0-9]{1,2})\.([0-9]{1,2})\.([0-9]{1,2})$'.
Already, testcases are available to check for too short release-number, too-long release-number, letters in release-number, sign in release-number , incorrect separator.
Additionally, a scenario can be added to test whether only one or two digits are allowed in each placeholder for a number. In earlier release-numbers, more than two digits were allowed in a placeholder.
This scenario, "multiple digits in a placeholder", can be added to the following services:
- Service Layer - Acceptance :: Attribute correctness :: release-number checked?
- [ ] /v1/register-yourself - registry-office-application-release-number
- [ ] /v1/embed-yourself - registry-office-application-release-number
- [ ] /v1/redirect-service-request-information - service-log-application-release-number
- [ ] /v1/redirect-oam-request-information - oam-log-application-release-number
- [ ] /v1/end-subscription - subscriber-release-number
- [ ] /v1/inquire-oam-request-approvals - oam-approval-application-release-number
- [ ] /v1/update-client - old-application-release-number and new-application-release-number
- [ ] /v1/redirect-topology-change-information - topology-application-release-number
- [ ] /v1/update-operation-client - application-release-number
- Oam Layer:
- [ ] http-client/release-number :: Acceptance :: Attribute checked?
| non_main | release number pattern update pattern of release number has been updated to already testcases are available to check for too short release number too long release number letters in release number sign in release number incorrect separator additionally a scenario can be added to test whether in each placeholder for a number only two one or digits are allowed in earlier release number more than two digits are allowed in a placeholder this scenario multiple digit in a placeholder can b e added to following services service layer acceptance attribute correctness release number checked register yourself registry office application release number embed yourself registry office application release number redirect service request information service log application release number redirect oam request information oam log application release number end subscription subscriber release number inquire oam request approvals oam approval application release number update client old application release number and new application release number redirect topology change information topology application release number update operation client application release number oam layer http client release number acceptance attribute checked | 0 |
916 | 4,621,653,846 | IssuesEvent | 2016-09-27 02:43:05 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | junos_command errors out with "TypeError: Type 'str' cannot be serialized" | affects_2.1 bug_report in progress networking P2 waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
junos_command core module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
$ ansible --version
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
No changes to configuration
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
$ uname -a
Linux dev-net-01 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
##### SUMMARY
<!--- Explain the problem briefly -->
I have an Ansible script where i am simply using junos_command module to get users list from Juniper switch, below is the snippet of my code. I keep getting the RuntimeWarning and TypeError: type 'str' cannot be serialized, whenever i try to run this. Moreover I have been successfully able to run commands like 'show version' using the below code itself. But just not 'show configuration system login' command. Please look into this.
**Script:**
name: / GET USERS / Get list of all the current users on switch
action: junos_command
args: { commands: 'show configuration system login',
provider: "{{ netconf }}" }
register: curr_users_on_switch
**Error:**
TASK [/ GET USERS / Get list of all the current users on switch] ***************
fatal: [rlab-er1]: FAILED! => {"changed": false, "failed": true, "module_stderr": "/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only!
\n warnings.warn(\"CLI command is for debug use only!\", RuntimeWarning)\nTraceback (most recent call last):
\n File \"/tmp/ansible_lVOmPp/ansible_module_junos_command.py\", line 261, in <module>
\n main()
\n File \"/tmp/ansible_lVOmPp/ansible_module_junos_command.py\", line 233, in main
\n xmlout.append(xml_to_string(response[index]))
\n File \"/tmp/ansible_lVOmPp/ansible_modlib.zip/ansible/module_utils/junos.py\", line 79, in xml_to_string\n File \"src/lxml/lxml.etree.pyx\", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized.
\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Mentioned in above section
<!--- Paste example playbooks or commands between quotes below -->
```
name: / GET USERS / Get list of all the current users on switch
action: junos_command
args: { commands: 'show configuration system login',
provider: "{{ netconf }}" }
register: curr_users_on_switch
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
returns the list of users on juniper switch. no error should be expected.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
TASK [/ GET USERS / Get list of all the current users on switch] ***************
<rlab-er1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `" && echo ansible-tmp-1472681123.92-107492843053729="` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `" ) && sleep 0'
<rlab-er1> PUT /tmp/tmpU9G6IE TO /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command
<rlab-er1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command; rm -rf "/home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/" > /dev/null 2>&1 && sleep 0'
fatal: [rlab-er1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "junos_command"}, "module_stderr": "/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only!\n warnings.warn(\"CLI command is for debug use only!\", RuntimeWarning)\nTraceback (most recent call last):\n File \"/tmp/ansible_mdpif7/ansible_module_junos_command.py\", line 261, in <module>\n main()\n File \"/tmp/ansible_mdpif7/ansible_module_junos_command.py\", line 233, in main\n xmlout.append(xml_to_string(response[index]))\n File \"/tmp/ansible_mdpif7/ansible_modlib.zip/ansible/module_utils/junos.py\", line 79, in xml_to_string\n File \"src/lxml/lxml.etree.pyx\", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized.\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
```
| True | junos_command errors out with "TypeError: Type 'str' cannot be serialized" - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
junos_command core module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
$ ansible --version
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
No changes to configuration
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
$ uname -a
Linux dev-net-01 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
##### SUMMARY
<!--- Explain the problem briefly -->
I have an Ansible script where i am simply using junos_command module to get users list from Juniper switch, below is the snippet of my code. I keep getting the RuntimeWarning and TypeError: type 'str' cannot be serialized, whenever i try to run this. Moreover I have been successfully able to run commands like 'show version' using the below code itself. But just not 'show configuration system login' command. Please look into this.
**Script:**
name: / GET USERS / Get list of all the current users on switch
action: junos_command
args: { commands: 'show configuration system login',
provider: "{{ netconf }}" }
register: curr_users_on_switch
**Error:**
TASK [/ GET USERS / Get list of all the current users on switch] ***************
fatal: [rlab-er1]: FAILED! => {"changed": false, "failed": true, "module_stderr": "/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only!
\n warnings.warn(\"CLI command is for debug use only!\", RuntimeWarning)\nTraceback (most recent call last):
\n File \"/tmp/ansible_lVOmPp/ansible_module_junos_command.py\", line 261, in <module>
\n main()
\n File \"/tmp/ansible_lVOmPp/ansible_module_junos_command.py\", line 233, in main
\n xmlout.append(xml_to_string(response[index]))
\n File \"/tmp/ansible_lVOmPp/ansible_modlib.zip/ansible/module_utils/junos.py\", line 79, in xml_to_string\n File \"src/lxml/lxml.etree.pyx\", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized.
\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Mentioned in above section
<!--- Paste example playbooks or commands between quotes below -->
```
name: / GET USERS / Get list of all the current users on switch
action: junos_command
args: { commands: 'show configuration system login',
provider: "{{ netconf }}" }
register: curr_users_on_switch
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
returns the list of users on juniper switch. no error should be expected.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
TASK [/ GET USERS / Get list of all the current users on switch] ***************
<rlab-er1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `" && echo ansible-tmp-1472681123.92-107492843053729="` echo $HOME/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729 `" ) && sleep 0'
<rlab-er1> PUT /tmp/tmpU9G6IE TO /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command
<rlab-er1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/junos_command; rm -rf "/home/mbhadoria/.ansible/tmp/ansible-tmp-1472681123.92-107492843053729/" > /dev/null 2>&1 && sleep 0'
fatal: [rlab-er1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "junos_command"}, "module_stderr": "/home/mbhadoria/.local/lib/python2.7/site-packages/jnpr/junos/device.py:429: RuntimeWarning: CLI command is for debug use only!\n warnings.warn(\"CLI command is for debug use only!\", RuntimeWarning)\nTraceback (most recent call last):\n File \"/tmp/ansible_mdpif7/ansible_module_junos_command.py\", line 261, in <module>\n main()\n File \"/tmp/ansible_mdpif7/ansible_module_junos_command.py\", line 233, in main\n xmlout.append(xml_to_string(response[index]))\n File \"/tmp/ansible_mdpif7/ansible_modlib.zip/ansible/module_utils/junos.py\", line 79, in xml_to_string\n File \"src/lxml/lxml.etree.pyx\", line 3350, in lxml.etree.tostring (src/lxml/lxml.etree.c:84534)\nTypeError: Type 'str' cannot be serialized.\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
```
| main | junos command errors out with typeerror type str cannot be serialized issue type bug report component name junos command core module ansible version ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables no changes to configuration os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific uname a linux dev net generic ubuntu smp wed jul utc gnu linux summary i have an ansible script where i am simply using junos command module to get users list from juniper switch below is the snippet of my code i keep getting the runtimewarning and typeerror type str cannot be serialized whenever i try to run this moreover i have been successfully able to run commands like show version using the below code itself but just not show configuration system login command please look into this script name get users get list of all the current users on switch action junos command args commands show configuration system login provider netconf register curr users on switch error task fatal failed changed false failed true module stderr home mbhadoria local lib site packages jnpr junos device py runtimewarning cli command is for debug use only n warnings warn cli command is for debug use only runtimewarning ntraceback most recent call last n file tmp ansible lvompp ansible module junos command py line in n main n file tmp ansible lvompp ansible module junos command py line in main n xmlout append xml to string response n file tmp ansible lvompp ansible modlib zip ansible module utils junos py line in xml to string n file src lxml lxml etree pyx line in lxml etree tostring src lxml lxml etree c ntypeerror type str cannot be serialized n module stdout msg module failure parsed false steps to reproduce for bugs show exactly 
how to reproduce the problem for new features show how the feature would be used mentioned in above section name get users get list of all the current users on switch action junos command args commands show configuration system login provider netconf register curr users on switch expected results returns the list of users on juniper switch no error should be expected actual results task exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home mbhadoria ansible tmp ansible tmp junos command exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home mbhadoria ansible tmp ansible tmp junos command rm rf home mbhadoria ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module name junos command module stderr home mbhadoria local lib site packages jnpr junos device py runtimewarning cli command is for debug use only n warnings warn cli command is for debug use only runtimewarning ntraceback most recent call last n file tmp ansible ansible module junos command py line in n main n file tmp ansible ansible module junos command py line in main n xmlout append xml to string response n file tmp ansible ansible modlib zip ansible module utils junos py line in xml to string n file src lxml lxml etree pyx line in lxml etree tostring src lxml lxml etree c ntypeerror type str cannot be serialized n module stdout msg module failure parsed false | 1 |
6,900 | 7,781,382,288 | IssuesEvent | 2018-06-05 23:54:26 | Microsoft/visualfsharp | https://api.github.com/repos/Microsoft/visualfsharp | opened | Place cursor in indented location with automatic brace completion turned on | Area-IDE Language Service Feature Improvement | Today, with automatic brace completion, the cursor will put you where you perhaps don't want it to be:


Upon enter, the closing brace is undented appropriately. But the cursor is in a place where I need to reposition it to write code if I want that undentation. | 1.0 | Place cursor in indented location with automatic brace completion turned on - Today, with automatic brace completion, the cursor will put you where you perhaps don't want it to be:


Upon enter, the closing brace is undented appropriately. But the cursor is in a place where I need to reposition it to write code if I want that undentation. | non_main | place cursor in indented location with automatic brace completion turned on today with automatic brace completion the cursor will put you where you perhaps don t want it to be upon enter the closing brace is undented appropriately but the cursor is in a place where i need to reposition it to write code if i want that undentation | 0 |
1,448 | 6,287,561,281 | IssuesEvent | 2017-07-19 15:11:40 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | IAM module: Managed policies | affects_2.3 aws cloud feature_idea waiting_on_maintainer | **Issue Type:**
Feature Idea
**Summary:**
Support managed policies. Currently, only inline policies are supported, making it impossible to link a policy to several groups/users/roles. I reckon in the meantime this limitation could be made clearer in the doc.
| True | IAM module: Managed policies - **Issue Type:**
Feature Idea
**Summary:**
Support managed policies. Currently, only inline policies are supported, making it impossible to link a policy to several groups/users/roles. I reckon in the meantime this limitation could be made clearer in the doc.
| main | iam module managed policies issue type feature idea summary support managed policies currently only inline policies are supported making it impossible to link a policy to several groups users roles i reckon in the meantime this limitation could be made clearer in the doc | 1 |
129,988 | 17,953,630,701 | IssuesEvent | 2021-09-13 03:10:48 | valentinavolgina2/sunny-hikes | https://api.github.com/repos/valentinavolgina2/sunny-hikes | opened | [Profile] sub-menu text disappears while clicking on it | bug design | Environment:
https://seattlesunseeker-test.herokuapp.com
Browsers: latest Chrome, latest Safari
Steps:
1. Go to the website page
2. Click on Log in, type username and password, click on the Login button
3. Click on a profile menu and select any submenu (e.g. Messages)
Actual result: Sub-menu text disappears while clicking on it
Expected result: sub-menu should be available and visible.

| 1.0 | [Profile] sub-menu text disappears while clicking on it - Environment:
https://seattlesunseeker-test.herokuapp.com
Browsers: latest Chrome, latest Safari
Steps:
1. Go to the website page
2. Click on Log in, type username and password, click on the Login button
3. Click on a profile menu and select any submenu (e.g. Messages)
Actual result: Sub-menu text disappears while clicking on it
Expected result: sub-menu should be available and visible.

| non_main | sub menu text disappears while clicking on it environment browsers latest chrome latest safari steps go to the website page click on log in type username and password click on the login button click on a profile menu and select any submenu e g messages actual result sub menu text disappears while clicking on it expected result sub menu should be available and visible | 0 |
128,403 | 5,064,775,880 | IssuesEvent | 2016-12-23 08:44:45 | praekeltfoundation/gem-bbb-indo | https://api.github.com/repos/praekeltfoundation/gem-bbb-indo | closed | Add Email to Registration | enhancement priority - highest qa | _Language: Indonesian
Device Make: Xiaomi
Device Model: Mi5
App Version: 0.0.9
Logged by: Pippa_
Add a new field to registration: Email and enable registration to be successful if either email or mobile number is completed (or both). Error text: _We need either your phone number or email address_
Two users during testing did not remember their phone number so we need to give them an option of inputting mobile number or email address or both. | 1.0 | Add Email to Registration - _Language: Indonesian
Device Make: Xiaomi
Device Model: Mi5
App Version: 0.0.9
Logged by: Pippa_
Add a new field to registration: Email and enable registration to be successful if either email or mobile number is completed (or both). Error text: _We need either your phone number or email address_
Two users during testing did not remember their phone number so we need to give them an option of inputting mobile number or email address or both. | non_main | add email to registration language indonesian device make xiaomi device model app version logged by pippa add a new field to registration email and enable registration to be successful if either email or mobile number is completed or both error text we need either your phone number or email address two users during testing did not remember their phone number so we need to give them an option of inputting mobile number or email address or both | 0 |
3,480 | 13,433,525,753 | IssuesEvent | 2020-09-07 09:57:33 | Kristinita/Erics-Green-Room | https://api.github.com/repos/Kristinita/Erics-Green-Room | opened | feat(training): first tournament answer time during training | mode need-maintainer tournaments | ### 1. Request
It would be good if someone training on an already-played tournament had the same time to answer as the tournament participants had.
### 2. Implementation example
#### 2.1. Question metadata
The following optional metadata is added to a question:
1. `*-fa-` (from "first answer"): the time of the first answer to the question at the tournament. For instance, `*-fa-4.14` means that the first answer to this question at the tournament was given 4.14 seconds after the question appeared.
1. `*-afa-` (from "after first answer"): the time that tournament participants were given to answer after the first answer. For example, `*-afa-7` means that after the time defined in `*-fa-`, the trainee has 7 seconds to answer.
#### 2.2. Training process
The trainee runs a pack containing the `*-fa-` and possibly `*-afa-` metadata. These can be obtained by parsing quiz logs.
1. If the trainee gave the correct answer faster than the fastest player at the tournament, i.e. scored a "venk" (beat the whole field), the Rooms bot writes something like `Congratulations! You answered this question faster than all tournament players!` The number of venks is totaled when the training session ends.
1. If the question has `*-fa-4.14` and the trainee did not manage to answer within 4.14 seconds, the Rooms bot writes something like `The answer at the tournament was given in 4.14 seconds`. If `*-afa-` is present, e.g. `*-afa-7`, the message looks like this: `The answer at the tournament was given in 4.14 seconds. You have another 7 seconds to answer`.
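The feedback rules of section 2.2 can be sketched as a single helper. This is a minimal sketch under assumptions: the function name and message wording are illustrative, not the Rooms bot's actual API.

```python
def training_feedback(user_time, fa, afa=None):
    """Pick the bot message for one question.

    user_time: trainee's correct-answer time in seconds, or None if no
    answer was given in time; fa/afa: the proposed pack metadata.
    (Sketch; names and wording are illustrative.)
    """
    if user_time is not None and user_time < fa:
        # The trainee beat the fastest tournament player.
        return ("Congratulations! You answered this question "
                "faster than all tournament players!")
    message = f"The answer at the tournament was given in {fa} seconds."
    if afa is not None:
        message += f" You have another {afa} seconds to answer."
    return message
```

The venk counter would then simply count how many questions took the first branch.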
#### 2.3. Showing results
It would also be good to show the trainee the tournament players' results at the same intervals at which they are shown to tournament players, so that the trainee can compare their performance with that of the players who took part in the tournament.
For example, in Znayka the top players' results are shown as follows:
```text
Знайка: Лучшие игроки турнира:
Знайка: 1. [Джи]фан_Саши. Ответов: 47
Знайка: 2. [BFG]Эндрио1. Ответов: 45
Знайка: 3. [Avis]Corvi. Ответов: 44
Знайка: 4. [NPA]alt_f4. Ответов: 44
Знайка: 5. [Bellissimo]В. Мр.. Ответов: 42
Знайка: 6. [Bellissimo]Але А.. Ответов: 42
Знайка: 7. [Neo]Флёр Ф.. Ответов: 39
Знайка: 8. [Neo]Dekadent. Ответов: 36
Знайка: 9. [Wi-Fi]Pickwick. Ответов: 36
Знайка: 10. [xumku]Salieri. Ответов: 36
Знайка: 11. [BFG]ельник. Ответов: 35
Знайка: 12. [Neo]The_The. Ответов: 34
Знайка: 13. [Neo]кОра. Ответов: 33
Знайка: 14. [Reboot]Курт К.. Ответов: 30
Знайка: 15. [Bellissimo]atikva. Ответов: 29
Знайка: 16. [BFG]СНП. Ответов: 28
Знайка: 17. [Reboot]bkmz. Ответов: 27
Знайка: 18. [Reboot]тихоход. Ответов: 27
Знайка: 19. [Llirik]Physic. Ответов: 27
Знайка: 20. [Bellissimo]♥Abigail. Ответов: 25
Знайка: 21. [BFG]игристая. Ответов: 25
```
This can be parsed and shown to the trainee as the pack is played.
### 3. Rationale
Benefits for:
#### 3.1. Players
With the metadata above, training comes close to real tournament conditions. Answering with plenty of hints and plenty of time is one thing: the result will surely be higher than what you usually show at tournaments. Tournament conditions, where the best opponents instantly give correct answers, are quite another. Training with this feature will help the trainee be prepared for tournament conditions and adapt their thinking to them.
#### 3.2. Organizers
With the metadata added, trainees will have the same time to answer as the tournament participants had. Yes, the psychological factors remain, and answers are sometimes leaked in the chat; still, on the whole the trainee will be in conditions similar to those the tournament players were in.
Tournament organizers, in turn, will be able to hold repeat runs of tournaments. The results of repeat runs can then be compared more or less objectively with the results of the first run.
### 4. Examples for different quiz platforms
#### 4.1. Znayka
##### 4.1.1. Rules
Having reviewed [**my videos**](https://vk.com/videos217885002?section=album_3), I determined that, at least back when I played there, the rules were as follows.
1. Players are given 18 seconds to answer.
1. If nobody has answered within that time, a hint is given plus another 12 seconds.
1. If nobody has answered within 30 seconds either, the answer is shown.
##### 4.1.2. Log example
```text
Знайка: Тема следующих вопросов - В ответе на первый вопрос: фамилии и не только
Флёр Ф.: ох
фан_Саши: ок
Знайка: Вопрос 108/121: Актриса Лора Хоуп Круз снялась в эпизодической роли в "Унесённых ветром". А была она ЕГО тётей. Напишите фамилию племянника. (5 букв. Автор - Barca)
UmnichkА: ойой
Знайка: [Джи]фан_Саши дал первым ответ за 5.3 секунд. Заработал 50 очков команде Джи. Всего очков 3020
ельник: грант
alt_f4: оскар
дашa: трамп
Знайка: Верный ответ - Гейбл.
Знайка: Все ответившие - Але А., СНП, Флёр Ф., фан_Саши, Dekadent, Corvi, такс, Salieri
```
##### 4.1.3. Implementation
Parse the log → write the results into the pack as follows:
```text
Актриса Лора Хоуп Круз снялась в эпизодической роли в "Унесённых ветром". А была она ЕГО тётей. Напишите фамилию племянника.*Гейбл*-fa-5.3
```
If the trainee did not manage to answer within 5.3 seconds, they are shown the message `The answer at the tournament was given in 5.3 seconds`.
1. If the `fa` value is ⩽ 18, the trainee must answer within 18 seconds.
1. If 18 < `fa` ⩽ 30, then, accordingly, within 30 seconds.
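A minimal parsing sketch for the Znayka log excerpt above. Assumptions: the three log phrases matched below generalize from the single excerpt in this issue, and the function names are hypothetical.

```python
import re

# The three phrases below are taken from the log excerpt in this issue;
# treating them as the general Znayka log format is an assumption.
QUESTION_RE = re.compile(r"Вопрос \d+/\d+: (?P<q>.+?) \(")
FIRST_ANSWER_RE = re.compile(r"дал первым ответ за (?P<t>[\d.]+) секунд")
CORRECT_RE = re.compile(r"Верный ответ - (?P<a>.+?)\.")

def znayka_pack_line(log_text: str) -> str:
    """Turn one question's Znayka log excerpt into a pack line with *-fa-."""
    question = QUESTION_RE.search(log_text).group("q")
    fa = FIRST_ANSWER_RE.search(log_text).group("t")
    answer = CORRECT_RE.search(log_text).group("a")
    return f"{question}*{answer}*-fa-{fa}"

def znayka_time_limit(fa: float) -> int:
    """18 seconds if the first answer fit into 18, otherwise 30 (rules above)."""
    return 18 if fa <= 18 else 30
```

On the excerpt above this yields the pack line with `*Гейбл*-fa-5.3`, and `znayka_time_limit` implements the two timing rules just listed.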
#### 4.2. Filimania
##### 4.2.1. Rules
At least until 2018, the amount of time given to answer depended on the first responder. As soon as the fastest player answered, the bot showed the time within which everyone else had to fit their answer.
In different years, hints in Filimania appeared after a different number of seconds. The values used at a specific tournament can be worked out from that tournament's logs.
##### 4.2.2. Log example
```text
[21:48:50]
}V{:
—————————————————————— Вопрос №108: Крупнейший в мире вантовый мост соединяет Владивосток с ЭТИМ островом —————————————————————— Подсказка: ^^^^^^^ 7 букв
[21:48:53]
}o{:
[Джи]Брайант
даёт верный ответ! В запасе ещё 6 сек.
[21:48:59]
}V{:
[Джи]Брайант
раньше всех отвечает "Русский". Всего правильных ответов: 53 (из них первых: 14)
Результаты команд: Neo (346+2), reboot (308+4), <kbd>F1</kbd> (267+3), Джи (249+4), OK (215+3), БЛЭК (213+3), йо (49), Aлкан (36+1)
--------------------------------
Также верно ответили: [F1]oplada, ЛК_, [OK]stab, [reboot]namor, [Джи]Эндрио, [reboot]Андрей, [Neo]Вега_Лиры, свинятина, [БЛЭК]Берегиня, [reboot]пенициллин, [Джи]Мурка, [Джи]kotэ, [reboot]Орел_Я, [OK]60дюймов, [Aлкан]руки-ноги, [БЛЭК]Doc, [OK]Smilla, [F1]mix12, [Neo]Flaccus, apsp, [F1]HRUST, [БЛЭК]Кусаригама
```
##### 4.2.3. Implementation
Parse the log → the following will be written into the pack:
```text
Крупнейший в мире вантовый мост соединяет Владивосток с ЭТИМ островом*Русский*-fa-3*-afa-6
```
1. Unfortunately, the time of the first answer is not shown in Filimania. We determine it by subtraction: `[21:48:53]` minus `[21:48:50]`. Unfortunately, only second-level precision is possible here.
1. The `*-afa-` value is the `N` in the phrase `В запасе ещё N секунд` ("N more seconds in reserve").
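The two derivation rules above can be sketched like this; a minimal sketch, assuming the first two bracketed timestamps in an excerpt are the question and the first correct answer, as in the log example (function name is hypothetical):

```python
import re
from datetime import datetime

TIMESTAMP_RE = re.compile(r"\[(\d{2}:\d{2}:\d{2})\]")
RESERVE_RE = re.compile(r"В запасе ещё (\d+) сек")

def filimania_fa_afa(log_text: str):
    """Derive (fa, afa) from one question's Filimania log excerpt.

    fa = first-correct-answer timestamp minus question timestamp
    (second precision only, as noted above); afa = N from the
    'В запасе ещё N сек' line.
    """
    t_question, t_answer = (
        datetime.strptime(t, "%H:%M:%S")
        for t in TIMESTAMP_RE.findall(log_text)[:2]
    )
    fa = int((t_answer - t_question).total_seconds())
    afa = int(RESERVE_RE.search(log_text).group(1))
    return fa, afa
```

On the excerpt above this gives `fa = 3` and `afa = 6`, matching the `*-fa-3*-afa-6` pack line.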
#### 4.3. Eric's Rooms
Once tournaments are played directly in Eric's Rooms themselves, it is desirable that at the end of a tournament the bot automatically writes the `fa` and `afa` values into the packs.
Thank you. | True | feat(training): first tournament answer time during training - ### 1. Request
Неплохо было бы, если тренирующийся на уже отыгранном турнире имел бы то же время на ответы, какое имели участники турнира.
### 2. Пример реализации
#### 2.1. Метаданные вопроса
К вопросу добавляются следующие опциональные метаданные:
1. `*-fa-` (от «first answer») — время первого ответа на вопрос на турнире. Допустим, `*-fa-4.14` означает, что первый ответ на данный вопрос на турнире был дан за 4.14 секунды с момента появления вопроса.
1. `*-afa-` (от «after first answer») — время, которое давалось участникам турнира на ответ после первого ответа. Например, `*-afa-7` значит, что на ответ после времени, определённом в `*-fa-`, у тренирующегося есть 7 секунд.
#### 2.2. Тренировочный процесс
Тренирующийся запускает пакет, содержащий метаданные `*-fa-` и, возможно, `*-afa-`. Их можно получить парсингом логов викторин.
1. Если тренирующийся дал правильный ответ быстрее, чем самый быстрый игрок на турнире, т. е. сделал вэнк, бот Комнат пишет что-то вроде `Поздравляем! На этот вопрос Вы ответили быстрее всех игроков турнира!` Количество вэнков по окончании тренировки суммируется.
1. Если же `*-fa-4.14`, а тренирующийся не успел ответить за 4.14 секунды, бот Комнат пишет что-то вроде `Ответ на турнире был дан за 4.14 секунды`. При наличии `*-afa-`, например, `*-afa-7`, сообщение будет выглядеть следующим образом: `Ответ на турнире был дан за 4.14 секунды. У Вас есть ещё 7 секунд на ответ`.
#### 2.3. Показ результатов
Также неплохо было бы показывать тренирующемуся результаты игроков турнира с той же периодичностью, с какой они показываются игрокам турнира, чтобы тренирующийся имел возможность сравнения своих показателей с показателями игроков, участвовавших в турнире.
Например, в Знайке результаты лучших игроков показываются следующим образом:
```text
Знайка: Лучшие игроки турнира:
Знайка: 1. [Джи]фан_Саши. Ответов: 47
Знайка: 2. [BFG]Эндрио1. Ответов: 45
Знайка: 3. [Avis]Corvi. Ответов: 44
Знайка: 4. [NPA]alt_f4. Ответов: 44
Знайка: 5. [Bellissimo]В. Мр.. Ответов: 42
Знайка: 6. [Bellissimo]Але А.. Ответов: 42
Знайка: 7. [Neo]Флёр Ф.. Ответов: 39
Знайка: 8. [Neo]Dekadent. Ответов: 36
Знайка: 9. [Wi-Fi]Pickwick. Ответов: 36
Знайка: 10. [xumku]Salieri. Ответов: 36
Знайка: 11. [BFG]ельник. Ответов: 35
Знайка: 12. [Neo]The_The. Ответов: 34
Знайка: 13. [Neo]кОра. Ответов: 33
Знайка: 14. [Reboot]Курт К.. Ответов: 30
Знайка: 15. [Bellissimo]atikva. Ответов: 29
Знайка: 16. [BFG]СНП. Ответов: 28
Знайка: 17. [Reboot]bkmz. Ответов: 27
Знайка: 18. [Reboot]тихоход. Ответов: 27
Знайка: 19. [Llirik]Physic. Ответов: 27
Знайка: 20. [Bellissimo]♥Abigail. Ответов: 25
Знайка: 21. [BFG]игристая. Ответов: 25
```
This can be parsed and shown to the trainee as they play through the pack.
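One possible way to parse such a results block (an illustrative sketch; the regex assumes the exact `Знайка: N. <player>. Ответов: K` line shape shown above):

```python
import re

# One row of the Znaika top-players listing, e.g.
# "Знайка: 1. [Джи]фан_Саши. Ответов: 47"
ROW = re.compile(r"Знайка: (\d+)\. (.+?)\. Ответов: (\d+)")

def parse_top_players(lines):
    """Return a list of (rank, player, answers) tuples from the listing."""
    out = []
    for line in lines:
        m = ROW.match(line)
        if m:
            out.append((int(m.group(1)), m.group(2), int(m.group(3))))
    return out

rows = parse_top_players([
    "Знайка: 1. [Джи]фан_Саши. Ответов: 47",
    "Знайка: 2. [BFG]Эндрио1. Ответов: 45",
])
```

The lazy `(.+?)` lets player names that themselves contain a dot (e.g. `[Bellissimo]В. Мр.`) still parse correctly.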
### 3. Rationale
Who benefits:
#### 3.1. Players
With the metadata above, training becomes close to real tournament conditions. It is one thing to answer with plenty of hints and plenty of time: the result will almost certainly be higher than what you usually show at tournaments. It is quite another under tournament conditions, when the best opponents give correct answers instantly. Training with this feature will help the trainee be prepared for tournament conditions and adapt their thinking to them.
#### 3.2. Organizers
With the metadata added, trainees will have the same time to answer as the tournament participants had. Yes, the psychological factors remain, and answers sometimes leak into the chat; but on the whole the trainee will be in conditions similar to those the tournament players were in.
Tournament organizers, in turn, will be able to hold replays of their tournaments. The results of a replay can then be compared more or less objectively with the results of the first run.
### 4. Examples for various vspollards
#### 4.1. Znaika
##### 4.1.1. Rules
Having reviewed [**my videos**](https://vk.com/videos217885002?section=album_3), I determined that, at least back when I played there, the rules were as follows.
1. Players are given 18 seconds to answer.
1. If no one answers within that time, a hint is given plus another 12 seconds.
1. If no one has answered within 30 seconds, the answer is shown.
##### 4.1.2. Log example
```text
Знайка: Тема следующих вопросов - В ответе на первый вопрос: фамилии и не только
Флёр Ф.: ох
фан_Саши: ок
Знайка: Вопрос 108/121: Актриса Лора Хоуп Круз снялась в эпизодической роли в "Унесённых ветром". А была она ЕГО тётей. Напишите фамилию племянника. (5 букв. Автор - Barca)
UmnichkА: ойой
Знайка: [Джи]фан_Саши дал первым ответ за 5.3 секунд. Заработал 50 очков команде Джи. Всего очков 3020
ельник: грант
alt_f4: оскар
дашa: трамп
Знайка: Верный ответ - Гейбл.
Знайка: Все ответившие - Але А., СНП, Флёр Ф., фан_Саши, Dekadent, Corvi, такс, Salieri
```
##### 4.1.3. Implementation
Parse the log → enter the result into the pack as follows:
```text
Актриса Лора Хоуп Круз снялась в эпизодической роли в "Унесённых ветром". А была она ЕГО тётей. Напишите фамилию племянника.*Гейбл*-fa-5.3
```
If the trainee did not manage to answer within 5.3 seconds, they are shown the message `The answer at the tournament was given in 5.3 seconds`.
1. If the `fa` value is ⩽ 18, the trainee has to answer within 18 seconds.
1. If 18 < `fa` ⩽ 30, then, accordingly, within 30 seconds.
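For instance, the first-answer time could be pulled out of the `дал первым ответ за … секунд` line of the log with a small parser like this (a sketch under the assumption that the bot's wording is exactly as in the log above, not an existing tool):

```python
import re

# Znaika announces the first correct answer like:
# "Знайка: [Джи]фан_Саши дал первым ответ за 5.3 секунд. ..."
FIRST_ANSWER = re.compile(r"дал первым ответ за ([\d.]+) секунд")

def first_answer_time(log_lines):
    """Return the first-answer time in seconds, or None if nobody answered."""
    for line in log_lines:
        m = FIRST_ANSWER.search(line)
        if m:
            return float(m.group(1))
    return None

log = ["Знайка: [Джи]фан_Саши дал первым ответ за 5.3 секунд. Заработал 50 очков команде Джи. Всего очков 3020"]
t = first_answer_time(log)
if t is not None:
    suffix = f"*-fa-{t}"  # appended to the question's line in the pack
```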
#### 4.2. Filimania
##### 4.2.1. Rules
At least until 2018, the time available to answer depended on the first responder. Once the fastest player answered → the bot showed the time within which everyone else had to submit their answer.
In Filimania the hints appeared after a different number of seconds in different years. The values used at a specific tournament can be worked out from that tournament's logs.
##### 4.2.2. Log example
```text
[21:48:50]
}V{:
—————————————————————— Вопрос №108: Крупнейший в мире вантовый мост соединяет Владивосток с ЭТИМ островом —————————————————————— Подсказка: ^^^^^^^ 7 букв
[21:48:53]
}o{:
[Джи]Брайант
даёт верный ответ! В запасе ещё 6 сек.
[21:48:59]
}V{:
[Джи]Брайант
раньше всех отвечает "Русский". Всего правильных ответов: 53 (из них первых: 14)
Результаты команд: Neo (346+2), reboot (308+4), <kbd>F1</kbd> (267+3), Джи (249+4), OK (215+3), БЛЭК (213+3), йо (49), Aлкан (36+1)
--------------------------------
Также верно ответили: [F1]oplada, ЛК_, [OK]stab, [reboot]namor, [Джи]Эндрио, [reboot]Андрей, [Neo]Вега_Лиры, свинятина, [БЛЭК]Берегиня, [reboot]пенициллин, [Джи]Мурка, [Джи]kotэ, [reboot]Орел_Я, [OK]60дюймов, [Aлкан]руки-ноги, [БЛЭК]Doc, [OK]Smilla, [F1]mix12, [Neo]Flaccus, apsp, [F1]HRUST, [БЛЭК]Кусаригама
```
##### 4.2.3. Implementation
Parse the log → the following is entered into the pack:
```text
Крупнейший в мире вантовый мост соединяет Владивосток с ЭТИМ островом*Русский*-fa-3*-afa-6
```
1. Unfortunately, Filimania does not show the time of the first answer. We determine it by subtraction: `[21:48:53]` minus `[21:48:50]`. Unfortunately, precision is only possible down to whole seconds.
1. The `*-afa-` value is the `N` in the phrase `В запасе ещё N секунд` ("N more seconds in reserve").
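Both values can be recovered mechanically. A rough sketch, assuming the timestamp format and the `В запасе ещё N сек` phrase exactly as shown in the log above:

```python
import re
from datetime import datetime

TS = re.compile(r"\[(\d{2}:\d{2}:\d{2})\]")          # e.g. [21:48:50]
RESERVE = re.compile(r"В запасе ещё (\d+) сек")       # e.g. "В запасе ещё 6 сек."

def fa_afa(question_ts, answer_ts, answer_line):
    """fa = answer timestamp minus question timestamp (whole seconds only);
    afa = N from the 'В запасе ещё N сек' phrase, if present."""
    fmt = "%H:%M:%S"
    t0 = datetime.strptime(TS.search(question_ts).group(1), fmt)
    t1 = datetime.strptime(TS.search(answer_ts).group(1), fmt)
    fa = int((t1 - t0).total_seconds())
    m = RESERVE.search(answer_line)
    afa = int(m.group(1)) if m else None
    return fa, afa

fa, afa = fa_afa("[21:48:50]", "[21:48:53]", "даёт верный ответ! В запасе ещё 6 сек.")
# for the log above this yields the pack suffix "*-fa-3*-afa-6"
```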
#### 4.3. Eric's Rooms
Once tournaments are played directly in Eric's Rooms themselves, it would be good if, at the end of a tournament, the bot automatically recorded the `fa` and `afa` values into the packs.
Спасибо. | main | feat training время первого турнирного ответа при тренировках запрос неплохо было бы если тренирующийся на уже отыгранном турнире имел бы то же время на ответы какое имели участники турнира пример реализации метаданные вопроса к вопросу добавляются следующие опциональные метаданные fa от «first answer» — время первого ответа на вопрос на турнире допустим fa означает что первый ответ на данный вопрос на турнире был дан за секунды с момента появления вопроса afa от «after first answer» — время которое давалось участникам турнира на ответ после первого ответа например afa значит что на ответ после времени определённом в fa у тренирующегося есть секунд тренировочный процесс тренирующийся запускает пакет содержащий метаданные fa и возможно afa их можно получить парсингом логов викторин если тренирующийся дал правильный ответ быстрее чем самый быстрый игрок на турнире т е сделал вэнк бот комнат пишет что то вроде поздравляем на этот вопрос вы ответили быстрее всех игроков турнира количество вэнков по окончании тренировки суммируется если же fa а тренирующийся не успел ответить за секунды бот комнат пишет что то вроде ответ на турнире был дан за секунды при наличии afa например afa сообщение будет выглядеть следующим образом ответ на турнире был дан за секунды у вас есть ещё секунд на ответ показ результатов также неплохо было бы показывать тренирующемуся результаты игроков турнира с той же периодичностью с какой они показываются игрокам турнира чтобы тренирующийся имел возможность сравнения своих показателей с показателями игроков участвовавших в турнире например в знайке результаты лучших игроков показываются следующим образом text знайка лучшие игроки турнира знайка фан саши ответов знайка ответов знайка corvi ответов знайка alt ответов знайка в мр ответов знайка але а ответов знайка флёр ф ответов знайка dekadent ответов знайка pickwick ответов знайка salieri ответов знайка ельник ответов знайка the the ответов знайка кора ответов знайка курт к 
ответов знайка atikva ответов знайка снп ответов знайка bkmz ответов знайка тихоход ответов знайка physic ответов знайка ♥abigail ответов знайка игристая ответов можно это распарсить и показывать тренирующемуся по ходу отыгрыша пакета аргументация польза для игроков с добавлением вышеуказанных метаданных тренировка получается приближённой к реальным турнирным условиям одно дело отвечать имея много подсказок и времени на ответ — результат наверняка будет выше чем ты обыкновенно показываешь на турнирах другое — турнирные условия когда лучшие из соперников моментально дают правильные ответы тренировки с данным нововведением помогут тренирующемуся быть подготовленным к турнирным условиям адаптировать мышление к ним организаторов при добавлении метаданных тренирующиеся будут иметь то же время на ответ что и участники турнира да остаются психологические моменты и ответы иногда в чате светят однако в целом тренирующийся будет находиться в условиях аналогичных тем в которых находились игроки на турнире организаторы турниров же смогут проводить повторные отыгрыши турниров результаты повторных отыгрышей можно будет более или менее объективно сравнивать с результатами первого примеры для различных всполлардов знайка правила просмотрев я определил что по крайней мере тогда когда я там играл правила были следующими игрокам на ответ даётся секунд если за это время никто не ответил то даётся подсказка и ещё секунд если и за секунд не ответил никто показывается ответ пример лога text знайка тема следующих вопросов в ответе на первый вопрос фамилии и не только флёр ф ох фан саши ок знайка вопрос актриса лора хоуп круз снялась в эпизодической роли в унесённых ветром а была она его тётей напишите фамилию племянника букв автор barca umnichkа ойой знайка фан саши дал первым ответ за секунд заработал очков команде джи всего очков ельник грант alt оскар дашa трамп знайка верный ответ гейбл знайка все ответившие але а снп флёр ф фан саши dekadent corvi такс salieri реализация парсим лог → 
заносим в пакет результаты следующим образом text актриса лора хоуп круз снялась в эпизодической роли в унесённых ветром а была она его тётей напишите фамилию племянника гейбл fa если тренирующийся не успел ответить за секунды ему показывается сообщение ответ на турнире был дан за секунды если значение fa ⩽ то тренирующийся должен уложиться в секунд если fa ⩽ то соответственно уложиться в филимания правила по крайней мере до года количество времени на ответ зависело от первого ответившего самый быстрый ответил → бот показывает время за которое должны уложиться остальные со своим ответом подсказки в филимании в разные годы появлялись через различное количество секунд какие значения были у конкретных турниров можно вычислить по логам этих турниров пример лога text v —————————————————————— вопрос № крупнейший в мире вантовый мост соединяет владивосток с этим островом —————————————————————— подсказка букв o брайант даёт верный ответ в запасе ещё сек v брайант раньше всех отвечает русский всего правильных ответов из них первых результаты команд neo reboot джи ok блэк йо aлкан также верно ответили oplada лк stab namor эндрио андрей вега лиры свинятина берегиня пенициллин мурка kotэ орел я руки ноги doc smilla flaccus apsp hrust кусаригама реализация парсим лог → в пакет будет занесено следующее text крупнейший в мире вантовый мост соединяет владивосток с этим островом русский fa afa время первого ответа к сожалению в филимании не показывается определяем его вычитанием — точность к сожалению возможна только до секунд значение afa — n во фразе в запасе ещё n секунд комнаты эрика когда турниры будут отыгрываться непосредственно в самих комнатах эрика по окончании турнира желательно чтобы бот автоматически заносил значения fa и afa в пакеты спасибо | 1 |
281,732 | 21,315,432,263 | IssuesEvent | 2022-04-16 07:26:18 | atmh/pe | https://api.github.com/repos/atmh/pe | opened | DeveloperGuide diagram on "Adding Priority Level feature" | type.DocumentationBug severity.VeryLow | The black words in this diagram are barely readable; white color would be a better choice:

<!--session: 1650087931982-ba5794c8-462a-46a9-b23b-53847f667e01-->
<!--Version: Web v3.4.2--> | 1.0 | DeveloperGuide diagram on "Adding Priority Level feature" - The black words in this diagram are barely readable; white color would be a better choice:

<!--session: 1650087931982-ba5794c8-462a-46a9-b23b-53847f667e01-->
<!--Version: Web v3.4.2--> | non_main | developerguide diagram on adding priority level feature the black words in this diagram is barely readable white color would be a better choice | 0 |
385,518 | 26,640,720,709 | IssuesEvent | 2023-01-25 04:22:57 | tuqulore/vue-3-practices | https://api.github.com/repos/tuqulore/vue-3-practices | opened | Document when to recreate the StackBlitz environment | documentation | Every time a StackBlitz environment is created from a fresh URL (for example https://stackblitz.com/fork/github/tuqulore/vue-3-practices/tree/main/handson-vue?file=src/App.vue&terminal=dev), it can take around a minute before it is up and running. For learners this can be noise that distracts from the lesson.
Therefore, it would be better to state explicitly in the slides that, unless otherwise specified, the same environment should be reused. | 1.0 | Document when to recreate the StackBlitz environment - Every time a StackBlitz environment is created from a fresh URL (for example https://stackblitz.com/fork/github/tuqulore/vue-3-practices/tree/main/handson-vue?file=src/App.vue&terminal=dev), it can take around a minute before it is up and running. For learners this can be noise that distracts from the lesson.
Therefore, it would be better to state explicitly in the slides that, unless otherwise specified, the same environment should be reused. | non_main | stackblitz 環境を再作成するタイミングのドキュメンテーション 毎回新規url(たとえば 環境を作成すると動作する状態になるまで~ 。受講者にとっては学習のノイズとなりうる。 そのため、特に指定がない限り同じ環境を使いまわすようにスライドに明示した方がよりよいかとおもった | 0
2,172 | 7,612,916,028 | IssuesEvent | 2018-05-01 19:16:44 | Microsoft/DirectXTK | https://api.github.com/repos/Microsoft/DirectXTK | closed | Retire Windows 8.1 Store, Windows phone 8.1, and VS 2013 projects | maintainence | At some point we should remove support for these older versions in favor of UWP apps
`DirectXTK_Windows81.vcxproj`
`DirectXTK_WindowsPhone81.vcxproj`
`DirectXTK_XAMLSilverlight_WindowsPhone81.vcxproj`
This would also be a good time to drop VS 2013 entirely:
`DirectXTK_Desktop_2013.vcxproj`
`DirectXTK_Desktop_2013_DXSDK`
Please put any requests for continued support for one or more of these here.
| True | Retire Windows 8.1 Store, Windows phone 8.1, and VS 2013 projects - At some point we should remove support for these older versions in favor of UWP apps
`DirectXTK_Windows81.vcxproj`
`DirectXTK_WindowsPhone81.vcxproj`
`DirectXTK_XAMLSilverlight_WindowsPhone81.vcxproj`
This would also be a good time to drop VS 2013 entirely:
`DirectXTK_Desktop_2013.vcxproj`
`DirectXTK_Desktop_2013_DXSDK`
Please put any requests for continued support for one or more of these here.
| main | retire windows store windows phone and vs projects at some point we should remove support for these older versions in favor of uwp apps directxtk vcxproj directxtk vcxproj directxtk xamlsilverlight vcxproj this would also be a good time to drop vs entirely directxtk desktop vcxproj directxtk desktop dxsdk please put any requests for continued support for one or more of these here | 1 |
345 | 3,222,670,266 | IssuesEvent | 2015-10-09 03:22:18 | Homebrew/homebrew | https://api.github.com/repos/Homebrew/homebrew | closed | Promote z3 from homebrew-science | maintainer feedback | Right now the Z3 SMT solver is only available through [homebrew-science](https://github.com/Homebrew/homebrew-science/blob/master/z3.rb). Other solvers like [CVC4](https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cvc4.rb) are available on mainline homebrew, and SMT solvers generally are being used more as dependencies for [other software](http://goto.ucsd.edu/~rjhala/liquid/haskell/blog/about/) and [programming languages](https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cryptol.rb). It would be nice to have more than just CVC4 to choose from without tapping homebrew-science.
It looks like this was proposed a couple years ago in #16188 and #21509, but at the time it was more difficult to build and was not available under an open source license. Hopefully there are no longer any blockers to including it in the main repo. | True | Promote z3 from homebrew-science - Right now the Z3 SMT solver is only available through [homebrew-science](https://github.com/Homebrew/homebrew-science/blob/master/z3.rb). Other solvers like [CVC4](https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cvc4.rb) are available on mainline homebrew, and SMT solvers generally are being used more as dependencies for [other software](http://goto.ucsd.edu/~rjhala/liquid/haskell/blog/about/) and [programming languages](https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cryptol.rb). It would be nice to have more than just CVC4 to choose from without tapping homebrew-science.
It looks like this was proposed a couple years ago in #16188 and #21509, but at the time it was more difficult to build and was not available under an open source license. Hopefully there are no longer any blockers to including it in the main repo. | main | promote from homebrew science right now the smt solver is only available through other solvers like are available on mainline homebrew and smt solvers generally are being used more as dependencies for and it would be nice to have more than just to choose from without tapping homebrew science it looks like this was proposed a couple years ago in and but at the time it was more difficult to build and was not available under an open source license hopefully there are no longer any blockers to including it in the main repo | 1 |
3,517 | 13,779,975,156 | IssuesEvent | 2020-10-08 14:23:27 | exercism/python | https://api.github.com/repos/exercism/python | closed | CI: disable Travis-CI in favor of Github Actions | maintainer action required | GitHub Actions has matured to the point that Travis is now obsolete in this repository.
@exercism/python Can one of you take care of this please? | True | CI: disable Travis-CI in favor of Github Actions - GitHub Actions has matured to the point that Travis is now obsolete in this repository.
@exercism/python Can one of you take care of this please? | main | ci disable travis ci in favor of github actions github actions has matured to the point that travis is now obsolete in this repository exercism python can one of you take care of this please | 1 |
105,060 | 9,015,697,544 | IssuesEvent | 2019-02-06 04:32:02 | DaJoker29/write-always | https://api.github.com/repos/DaJoker29/write-always | closed | Server: Routes/Middleware Unit Tests | test | All the routes should be tested against the API...which hasn't been written yet. | 1.0 | Server: Routes/Middleware Unit Tests - All the routes should be tested against the API...which hasn't been written yet. | non_main | server routes middleware unit tests all the routes should be tested against the api which i hasn t been written yet | 0
1,894 | 6,577,538,586 | IssuesEvent | 2017-09-12 01:36:59 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | route53: always thinks alias record has changed | affects_2.0 aws bug_report cloud waiting_on_maintainer | ##### Issue Type:
<!-- Please pick one and delete the rest: -->
- Bug Report
##### Plugin Name:
route53
##### Ansible Version:
```
ansible 2.0.1.0
config file = /Users/spencer/src/khaki/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
none relevant
##### Environment:
OS X
##### Summary:
route53 command=create for alias record without overwrite=yes always fails when record exists. It correctly detects matching record if you are not using alias records.
##### Steps To Reproduce:
set up a task to create a simple alias record in Route 53 and run it
now try running it again, it fails
##### Example task
```
- name: Create A record alias for proxy node
route53:
aws_access_key: "{{aws_access_key}}"
aws_secret_key: "{{aws_secret_key}}"
zone: "larkave.com"
command: create
type: A
alias: True
alias_hosted_zone_id: "{{ kube_proxy_zone_id }}"
value: "{{ kube_proxy_dns_name }}"
record: "proxy.{{ kube_dns_domain }}"
ttl: 300
```
#### Expected Results:
"ok: [localhost]"
##### Actual Results:
```
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"alias": true, "alias_hosted_zone_id": "Z33MTJ483KN6FU", "aws_access_key": "AKIAJJU7TXWYTYP4GQAA", "aws_secret_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "command": "create", "ec2_url": null, "failover": null, "health_check": null, "hosted_zone_id": null, "identifier": null, "overwrite": null, "private_zone": false, "profile": null, "record": "proxy.larkave.com", "region": null, "retry_interval": 500, "security_token": null, "ttl": 300, "type": "A", "validate_certs": true, "value": "tst-kube-proxy-1257808881.us-west-2.elb.amazonaws.com", "vpc_id": null, "weight": null, "zone": "larkave.com"}, "module_name": "route53"}, "msg": "Record already exists with different value. Set 'overwrite' to replace it"}
```
| True | route53: always thinks alias record has changed - ##### Issue Type:
<!-- Please pick one and delete the rest: -->
- Bug Report
##### Plugin Name:
route53
##### Ansible Version:
```
ansible 2.0.1.0
config file = /Users/spencer/src/khaki/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
none relevant
##### Environment:
OS X
##### Summary:
route53 command=create for alias record without overwrite=yes always fails when record exists. It correctly detects matching record if you are not using alias records.
##### Steps To Reproduce:
set up a task to create a simple alias record in Route 53 and run it
now try running it again, it fails
##### Example task
```
- name: Create A record alias for proxy node
route53:
aws_access_key: "{{aws_access_key}}"
aws_secret_key: "{{aws_secret_key}}"
zone: "larkave.com"
command: create
type: A
alias: True
alias_hosted_zone_id: "{{ kube_proxy_zone_id }}"
value: "{{ kube_proxy_dns_name }}"
record: "proxy.{{ kube_dns_domain }}"
ttl: 300
```
#### Expected Results:
"ok: [localhost]"
##### Actual Results:
```
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"alias": true, "alias_hosted_zone_id": "Z33MTJ483KN6FU", "aws_access_key": "AKIAJJU7TXWYTYP4GQAA", "aws_secret_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "command": "create", "ec2_url": null, "failover": null, "health_check": null, "hosted_zone_id": null, "identifier": null, "overwrite": null, "private_zone": false, "profile": null, "record": "proxy.larkave.com", "region": null, "retry_interval": 500, "security_token": null, "ttl": 300, "type": "A", "validate_certs": true, "value": "tst-kube-proxy-1257808881.us-west-2.elb.amazonaws.com", "vpc_id": null, "weight": null, "zone": "larkave.com"}, "module_name": "route53"}, "msg": "Record already exists with different value. Set 'overwrite' to replace it"}
```
| main | always thinks alias record has changed issue type bug report plugin name ansible version ansible config file users spencer src khaki ansible ansible cfg configured module search path default w o overrides ansible configuration none relevant environment os x summary command create for alias record without overwrite yes always fails when record exists it correctly detects matching record if you are not using alias records steps to reproduce set up a task to create a simple alias record in route and run it now try running it again it fails example task name create a record alias for proxy node aws access key aws access key aws secret key aws secret key zone larkave com command create type a alias true alias hosted zone id kube proxy zone id value kube proxy dns name record proxy kube dns domain ttl expected results ok actual results fatal failed changed false failed true invocation module args alias true alias hosted zone id aws access key aws secret key value specified in no log parameter command create url null failover null health check null hosted zone id null identifier null overwrite null private zone false profile null record proxy larkave com region null retry interval security token null ttl type a validate certs true value tst kube proxy us west elb amazonaws com vpc id null weight null zone larkave com module name msg record already exists with different value set overwrite to replace it | 1 |
1,478 | 6,412,426,005 | IssuesEvent | 2017-08-08 03:12:01 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Permissions issue when copying directory | affects_2.1 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/task/feature -->
`copy`
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
$ ansible --version
ansible 2.1.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Absolutely nothing has changed in my config, but I _did_ upgrade Ansible from 2.0.2 right before the failure began.
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
I'm running Ansible from MacOS Sierra (a recent upgrade) to bring up an Ubuntu 14.04 server in a Vagrant/Virtualbox environment.
##### SUMMARY
<!--- Explain the problem briefly -->
I'm trying to copy a directory (and its files) from `<role-name>/files` to the file system and set appropriate permissions. Ansible seems to think I'm using symbolic permissions when copying a directory. I was running the playbook just fine, but a user was reporting this error and that user was running v2.1.2 so I upgraded. After the upgrade, I was got the issue as well.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
As far as I can tell, just run the task below under Ansible 2.1.2.
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Dotfiles | Install ViM customizations
become: yes
become_user: "{{ username }}"
copy:
src: .vim
dest: ~/
mode: 0664
directory_mode: 0775
force: yes
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The directory should be copied and permissions set as specified.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
I get an error related to symbolic permissions.
<!--- Paste verbatim command output between quotes below -->
```
TASK [user : Dotfiles | Install ViM customizations] ****************************
fatal: [default]: FAILED! => {"changed": false, "checksum": "109d2e70b4a83619eec12768f976177e55168de1", "details": "bad symbolic permission for mode: 509", "failed": true, "gid": 1000, "group": "vagrant", "mode": "0775", "msg": "mode must be in octal or symbolic form", "owner": "vagrant", "path": "/home/vagrant/.vim", "size": 4096, "state": "directory", "uid": 1000}
```
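An editorial aside, not part of the original report: the `509` in the error message is consistent with YAML reading the unquoted scalar `0775` as an octal integer literal and it then being rendered in decimal, since 0o775 equals 509; quoting modes as strings (e.g. `directory_mode: "0775"`) is the commonly recommended way to avoid that ambiguity. A quick check of the arithmetic:

```python
# The unquoted YAML scalar 0775, read as an octal literal, is 509 in decimal,
# matching the "bad symbolic permission for mode: 509" message.
assert int("0775", 8) == 509
# And 509 formatted back as zero-padded octal recovers the intended mode string.
assert format(509, "04o") == "0775"
```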
| True | Permissions issue when copying directory - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/task/feature -->
`copy`
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
$ ansible --version
ansible 2.1.2.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Absolutely nothing has changed in my config, but I _did_ upgrade Ansible from 2.0.2 right before the failure began.
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
I'm running Ansible from MacOS Sierra (a recent upgrade) to bring up an Ubuntu 14.04 server in a Vagrant/Virtualbox environment.
##### SUMMARY
<!--- Explain the problem briefly -->
I'm trying to copy a directory (and its files) from `<role-name>/files` to the file system and set appropriate permissions. Ansible seems to think I'm using symbolic permissions when copying a directory. I was running the playbook just fine, but a user was reporting this error and that user was running v2.1.2 so I upgraded. After the upgrade, I was got the issue as well.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
As far as I can tell, just run the task below under Ansible 2.1.2.
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Dotfiles | Install ViM customizations
become: yes
become_user: "{{ username }}"
copy:
src: .vim
dest: ~/
mode: 0664
directory_mode: 0775
force: yes
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
The directory should be copied and permissions set as specified.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
I get an error related to symbolic permissions.
<!--- Paste verbatim command output between quotes below -->
```
TASK [user : Dotfiles | Install ViM customizations] ****************************
fatal: [default]: FAILED! => {"changed": false, "checksum": "109d2e70b4a83619eec12768f976177e55168de1", "details": "bad symbolic permission for mode: 509", "failed": true, "gid": 1000, "group": "vagrant", "mode": "0775", "msg": "mode must be in octal or symbolic form", "owner": "vagrant", "path": "/home/vagrant/.vim", "size": 4096, "state": "directory", "uid": 1000}
```
| main | permissions issue when copying directory issue type bug report component name copy ansible version ansible version ansible config file configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables absolutely nothing has changed in my config but i did upgrade ansible from right before the failure began os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific i m unning ansible from macos sierra a recent upgrade to bring up an ubuntu server in a vagrant virtualbox environment summary i m trying to copy a directory and its files from files to the file system and set appropriate permissions ansible seems to think i m using symbolic permissions when copying a directory i was running the playbook just fine but a user was reporting this error and that user was running so i upgraded after the upgrade i was got the issue as well steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used as far as i can tell just run the task below under ansible name dotfiles install vim customizations become yes become user username copy src vim dest mode directory mode force yes expected results the directory should be copied and permissions set as specified actual results i get an error related to symbolic permissions task fatal failed changed false checksum details bad symbolic permission for mode failed true gid group vagrant mode msg mode must be in octal or symbolic form owner vagrant path home vagrant vim size state directory uid | 1 |
345,475 | 10,368,126,056 | IssuesEvent | 2019-09-07 14:29:11 | Goxiaoy/aanote | https://api.github.com/repos/Goxiaoy/aanote | opened | Create user's history statistic as first page of focusing page | priority: low | Statistics filter by :
- [ ] Yearly
- [ ] Monthly
- [ ] Weekly
contain:
- [ ] total cost
- [ ] average
| 1.0 | Create user's history statistic as first page of focusing page - Statistics filter by :
- [ ] Yearly
- [ ] Monthly
- [ ] Weekly
contain:
- [ ] total cost
- [ ] average
| non_main | create user s history statistic as first page of focusing page statistics filter by yearlly monthly weekly contain total cost average | 0 |
379 | 3,412,608,670 | IssuesEvent | 2015-12-06 01:06:26 | tgstation/-tg-station | https://api.github.com/repos/tgstation/-tg-station | closed | Caves.dm is broken and won't open | Bug Maintainability - Hinders improvements | On inspection, it appears to still have merge tags in it, probably due to an improper merge conflict fix.
@MMMiracles | True | Caves.dm is broken and won't open - On inspection, it appears to still have merge tags in it, probably due to an improper merge conflict fix.
@MMMiracles | main | caves dm is broken and won t open on inspection it appears to still have merge tags in it probably due to an improper merge conflict fix mmmiracles | 1 |
3,060 | 11,456,912,448 | IssuesEvent | 2020-02-06 22:17:56 | 18F/cg-product | https://api.github.com/repos/18F/cg-product | closed | Debug and fix Logstash pipeline intermittent/changing failures | contractor-2-troubleshooting contractor-3-maintainability operations stale | Every time the logsearch/logstash deployment pipeline runs, there's a roughly 20-30% chance that it will fail with a non-reproducible error message. This error changes from run to run.
- Typically, one or more services fails to either start or stop. After `ssh`ing in, there's no error message. This may be due to some timeout setting somewhere but the root cause is really unknown at this point.
- Staging seems to perform worse than production or development
- When the deployment gets stalled due to errors, the logsearch deployment blocks the entire deployment process until it's restarted
- Current fix: Rerun the job, maybe it works?
- Last successful staging deployment was August 5
- First failure on staging was August 28
- Last successful production deployment was August 2
- Job reruns when it gets new resources - we have much newer resources in development than staging or production
- Problem seems to be with the deployments rather than the actual software
@bengerman13 has been trying to fast-forward to get the deployment update to date - we consume `logsearch-for-cloud-foundry` and `logsearch-boshrelease`, combine them and create a new artifact, and then deploy that artifact using a dynamically generated bosh manifest.
## Next steps
- Determine the proximate cause
- We need better debug information on what's failing and why
- Pairing/dogpiling to get staging to run successfully more than once in a row
- We diverged from the two upstreams roughly 2 years ago, so there's foundational work to be done to make sure we don't break everything by upgrading (e.g. APIs)
- This is in progress, check with @bengerman13 before starting work on that step
## Acceptance Criteria
- [ ] Upon release of a new stemcell, the deployment pipeline rolls the update through with no manual intervention | True | Debug and fix Logstash pipeline intermittent/changing failures - Every time the logsearch/logstash deployment pipeline runs, there's a roughly 20-30% chance that it will fail with a non-reproducible error message. This error changes from run to run.
- Typically, one or more services fails to either start or stop. After `ssh`ing in, there's no error message. This may be due to some timeout setting somewhere but the root cause is really unknown at this point.
- Staging seems to perform worse than production or development
- When the deployment gets stalled due to errors, the logsearch deployment blocks the entire deployment process until it's restarted
- Current fix: Rerun the job, maybe it works?
- Last successful staging deployment was August 5
- First failure on staging was August 28
- Last successful production deployment was August 2
- Job reruns when it gets new resources - we have much newer resources in development than staging or production
- Problem seems to be with the deployments rather than the actual software
@bengerman13 has been trying to fast-forward to get the deployment update to date - we consume `logsearch-for-cloud-foundry` and `logsearch-boshrelease`, combine them and create a new artifact, and then deploy that artifact using a dynamically generated bosh manifest.
## Next steps
- Determine the proximate cause
- We need better debug information on what's failing and why
- Pairing/dogpiling to get staging to run successfully more than once in a row
- We diverged from the two upstreams roughly 2 years ago, so there's foundational work to be done to make sure we don't break everything by upgrading (e.g. APIs)
- This is in progress, check with @bengerman13 before starting work on that step
## Acceptance Criteria
- [ ] Upon release of a new stemcell, the deployment pipeline rolls the update through with no manual intervention | main | debug and fix logstash pipeline intermittent changing failures every time the logsearch logstash deployment pipeline runs there s a roughly chance that it will fail with a non reproducible error message this error changes from run to run typically one or more services fails to either start or stop after ssh ing in there s no error message this may be due to some timeout setting somewhere but the root cause is really unknown at this point staging seems to perform worse than production or development when the deployment gets stalled due to errors the logsearch deployment blocks the entire deployment process until it s restarted current fix rerun the job maybe it works last successful staging deployment was august first failure on staging was august last successful production deployment was august job reruns when it gets new resources we have much newer resources in development than staging or production problem seems to be with the deployments rather than the actual software has been trying to fast forward to get the deployment update to date we consume logsearch for cloud foundry and logsearch boshrelease combine them and create a new artifact and then deploy that artifact using a dynamically generated bosh manifest next steps determine the proximate cause we need better debug information on what s failing and why pairing dogpiling to get staging to run successfully more than once in a row we diverged from the two upstreams roughly years ago so there s foundational work to be done to make sure we don t break everything by upgrading e g apis this is in progress check with before starting work on that step acceptance criteria upon release of a new stemcell the deployment pipeline rolls the update through with no manual intervention | 1 |
5,646 | 28,371,430,432 | IssuesEvent | 2023-04-12 17:17:07 | deislabs/spiderlightning | https://api.github.com/repos/deislabs/spiderlightning | opened | "make build-c" fails to build sample apps | 🐛 bug 🚧 maintainer issue | **Description of the bug**
make build-c
make -C examples/multi_capability-demo-clang/ clean
make[1]: Entering directory '/home/squillace/work/squillace/spiderlightning/examples/multi_capability-demo-clang'
rm -rf bindings/
mkdir bindings/
make[1]: Leaving directory '/home/squillace/work/squillace/spiderlightning/examples/multi_capability-demo-clang'
make -C examples/multi_capability-demo-clang/ bindings
make[1]: Entering directory '/home/squillace/work/squillace/spiderlightning/examples/multi_capability-demo-clang'
wit-bindgen c --import ../../wit/keyvalue.wit --out-dir bindings/
Error: expected an identifier or string, found '('
--> ../../wit/keyvalue.wit:4:19
|
4 | static open: func(name: string) -> expected<keyvalue, keyvalue-error>
| ^
make[1]: *** [Makefile:21: bindings] Error 1
make[1]: Leaving directory '/home/squillace/work/squillace/spiderlightning/examples/multi_capability-demo-clang'
make: *** [Makefile:158: build-c] Error 2
**To Reproduce**
`make build-c`
**Additional context**
| True | "make build-c" fails to build sample apps - **Description of the bug**
make build-c
make -C examples/multi_capability-demo-clang/ clean
make[1]: Entering directory '/home/squillace/work/squillace/spiderlightning/examples/multi_capability-demo-clang'
rm -rf bindings/
mkdir bindings/
make[1]: Leaving directory '/home/squillace/work/squillace/spiderlightning/examples/multi_capability-demo-clang'
make -C examples/multi_capability-demo-clang/ bindings
make[1]: Entering directory '/home/squillace/work/squillace/spiderlightning/examples/multi_capability-demo-clang'
wit-bindgen c --import ../../wit/keyvalue.wit --out-dir bindings/
Error: expected an identifier or string, found '('
--> ../../wit/keyvalue.wit:4:19
|
4 | static open: func(name: string) -> expected<keyvalue, keyvalue-error>
| ^
make[1]: *** [Makefile:21: bindings] Error 1
make[1]: Leaving directory '/home/squillace/work/squillace/spiderlightning/examples/multi_capability-demo-clang'
make: *** [Makefile:158: build-c] Error 2
**To Reproduce**
`make build-c`
**Additional context**
| main | make build c fails to build sample apps description of the bug make build c make c examples multi capability demo clang clean make entering directory home squillace work squillace spiderlightning examples multi capability demo clang rm rf bindings mkdir bindings make leaving directory home squillace work squillace spiderlightning examples multi capability demo clang make c examples multi capability demo clang bindings make entering directory home squillace work squillace spiderlightning examples multi capability demo clang wit bindgen c import wit keyvalue wit out dir bindings error expected an identifier or string found wit keyvalue wit static open func name string expected make error make leaving directory home squillace work squillace spiderlightning examples multi capability demo clang make error to reproduce make build c additional context | 1 |
112,149 | 14,221,990,475 | IssuesEvent | 2020-11-17 16:21:01 | woocommerce/woocommerce-gutenberg-products-block | https://api.github.com/repos/woocommerce/woocommerce-gutenberg-products-block | closed | Block Idea: Shop Directory Block | action: needs design action: needs feedback type: enhancement ◼️ block: all products | Now that we have the All Products block and a number of Filter blocks available. I think there'd be value in creating a container block that has the All Products block setup with filter blocks in a curated layout so merchants can just drop this in a page and have it ready to go. Some thoughts:
- ability to adjust the layout of the inner blocks (all products, filters etc) selecting a patterns.
- everything contained in the container would be in it's own context so that search results/filter selections etc will be specific to that container. This will feasibly allow for merchants to setup containers on the same page for various targeted types of products.
Primary purpose behind this work would be to enable merchants to get started with a Shop Directory out of the box with little to no configuration. It could also be used in various contexts (and potentially replace the current shop page view). | 1.0 | Block Idea: Shop Directory Block - Now that we have the All Products block and a number of Filter blocks available. I think there'd be value in creating a container block that has the All Products block setup with filter blocks in a curated layout so merchants can just drop this in a page and have it ready to go. Some thoughts:
- ability to adjust the layout of the inner blocks (all products, filters etc) selecting a patterns.
- everything contained in the container would be in it's own context so that search results/filter selections etc will be specific to that container. This will feasibly allow for merchants to setup containers on the same page for various targeted types of products.
Primary purpose behind this work would be to enable merchants to get started with a Shop Directory out of the box with little to no configuration. It could also be used in various contexts (and potentially replace the current shop page view). | non_main | block idea shop directory block now that we have the all products block and a number of filter blocks available i think there d be value in creating a container block that has the all products block setup with filter blocks in a curated layout so merchants can just drop this in a page and have it ready to go some thoughts ability to adjust the layout of the inner blocks all products filters etc selecting a patterns everything contained in the container would be in it s own context so that search results filter selections etc will be specific to that container this will feasibly allow for merchants to setup containers on the same page for various targeted types of products primary purpose behind this work would be to enable merchants to get started with a shop directory out of the box with little to no configuration it could also be used in various contexts and potentially replace the current shop page view | 0 |
375,857 | 11,135,360,165 | IssuesEvent | 2019-12-20 14:12:13 | bounswe/bounswe2019group4 | https://api.github.com/repos/bounswe/bounswe2019group4 | closed | Frontend bug investment | Front-End Priority: High Status: In Progress Type: Bug | After the investment feature is added , some bugs occurred. For example, non-trader user can not enter the homepage now. | 1.0 | Frontend bug investment - After the investment feature is added , some bugs occurred. For example, non-trader user can not enter the homepage now. | non_main | frontend bug investment after the investment feature is added some bugs occurred for example non trader user can not enter the homepage now | 0 |
3,075 | 11,643,897,265 | IssuesEvent | 2020-02-29 16:10:03 | RalfKoban/MiKo-Analyzers | https://api.github.com/repos/RalfKoban/MiKo-Analyzers | closed | Do not raise events inside Lock statement | Area: analyzer Area: maintainability backlog feature | Raising events and therefore calling "outside" code inside `lock` statements is dangerous. Deadlocks could easily occur.
So, inside lock statements events should not be raised. | True | Do not raise events inside Lock statement - Raising events and therefore calling "outside" code inside `lock` statements is dangerous. Deadlocks could easily occur.
So, inside lock statements events should not be raised. | main | do not raise events inside lock statement raising events and therefore calling outside code inside lock statements is dangerous deadlocks could easily occur so inside lock statements events should not be raised | 1 |
70,380 | 23,146,513,806 | IssuesEvent | 2022-07-29 01:51:32 | zed-industries/feedback | https://api.github.com/repos/zed-industries/feedback | opened | Unicode characters of nonstandard width overlap one another as if they were monospaced | defect triage | ### Check for existing issues
- [X] Completed
### Describe the bug
In the following image, the ✚ and 2 overlap, and the ellipses extend into the closing parenthesis:
<img width="361" alt="Screen Shot 2022-07-28 at 6 47 37 PM" src="https://user-images.githubusercontent.com/33299860/181665828-46dfde71-e856-4910-b2ed-418007fde6b5.png">
For comparison, iTerm2:
<img width="263" alt="Screen Shot 2022-07-28 at 6 49 18 PM" src="https://user-images.githubusercontent.com/33299860/181665999-1cf6a36f-c23b-45af-a05b-0fdd1b8f9791.png">
### To reproduce
Copy/paste (main|✚2…) into the Zed terminal.
### Expected behavior
Characters should not overlap.
### Environment
Zed 0.49.1 – /Applications/Zed.app \nmacOS 12.1 \narchitecture x86_64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue
_No response_ | 1.0 | Unicode characters of nonstandard width overlap one another as if they were monospaced - ### Check for existing issues
- [X] Completed
### Describe the bug
In the following image, the ✚ and 2 overlap, and the ellipses extend into the closing parenthesis:
<img width="361" alt="Screen Shot 2022-07-28 at 6 47 37 PM" src="https://user-images.githubusercontent.com/33299860/181665828-46dfde71-e856-4910-b2ed-418007fde6b5.png">
For comparison, iTerm2:
<img width="263" alt="Screen Shot 2022-07-28 at 6 49 18 PM" src="https://user-images.githubusercontent.com/33299860/181665999-1cf6a36f-c23b-45af-a05b-0fdd1b8f9791.png">
### To reproduce
Copy/paste (main|✚2…) into the Zed terminal.
### Expected behavior
Characters should not overlap.
### Environment
Zed 0.49.1 – /Applications/Zed.app \nmacOS 12.1 \narchitecture x86_64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue
_No response_ | non_main | unicode characters of nonstandard width overlap one another as if they were monospaced check for existing issues completed describe the bug in the following image the ✚ and overlap and the ellipses extend into the closing parenthesis img width alt screen shot at pm src for comparison img width alt screen shot at pm src to reproduce copy paste main ✚ … into the zed terminal expected behavior characters should not overlap environment zed – applications zed app nmacos narchitecture if applicable add mockups screenshots to help explain present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue no response | 0 |