Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,521 | 8,655,460,356 | IssuesEvent | 2018-11-27 16:00:34 | codestation/qcma | https://api.github.com/repos/codestation/qcma | closed | The PC has been disconnected | unmaintained | Every time I try to connect my PS Vita to QCMA, it connects for about 20 seconds, flashes a message for about a fifth of a second (too short for me to read what it says) followed by my computer giving me the "device connected" and "device disconnected" noises in succession, then my PS Vita gives me the message "The PC has been disconnected". My PS Vita is on FW 3.65, and QCMA 0.4.1. I'm running Windows 10.
I have tried about everything that I could find through google searches (reinstalling/replacing drivers, reinstalling QCMA) and none of them fix the problem. I checked through the issues posted here and none of them seemed to have the same problem as I did.
I tried to retrieve a log, but whenever I opened it in either console or the log mode I found in [here](https://github.com/codestation/qcma/issues/80), it would refuse to connect (it still connects in the regular mode).
Please... send help. | True | The PC has been disconnected - Every time I try to connect my PS Vita to QCMA, it connects for about 20 seconds, flashes a message for about a fifth of a second (too short for me to read what it says) followed by my computer giving me the "device connected" and "device disconnected" noises in succession, then my PS Vita gives me the message "The PC has been disconnected". My PS Vita is on FW 3.65, and QCMA 0.4.1. I'm running Windows 10.
I have tried about everything that I could find through google searches (reinstalling/replacing drivers, reinstalling QCMA) and none of them fix the problem. I checked through the issues posted here and none of them seemed to have the same problem as I did.
I tried to retrieve a log, but whenever I opened it in either console or the log mode I found in [here](https://github.com/codestation/qcma/issues/80), it would refuse to connect (it still connects in the regular mode).
Please... send help. | main | the pc has been disconnected every time i try to connect my ps vita to qcma it connects for about seconds flashes a message for about a fifth of a second too short for me to read what it says followed by my computer giving me the device connected and device disconnected noises in succession then my ps vita gives me the message the pc has been disconnected my ps vita is on fw and qcma i m running windows i have tried about everything that i could find through google searches reinstalling replacing drivers reinstalling qcma and none of them fix the problem i checked through the issues posted here and none of them seemed to have the same problem as i did i tried to retrieve a log but whenever i opened it in either console or the log mode i found in it would refuse to connect it still connects in the regular mode please send help | 1 |
147,009 | 19,479,602,537 | IssuesEvent | 2021-12-25 01:02:40 | venkateshreddypala/CSCI-6040 | https://api.github.com/repos/venkateshreddypala/CSCI-6040 | opened | CVE-2021-3828 (High) detected in nltk-3.4.4.zip | security vulnerability | ## CVE-2021-3828 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nltk-3.4.4.zip</b></p></summary>
<p>Natural Language Toolkit</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/87/16/4d247e27c55a7b6412e7c4c86f2500ae61afcbf5932b9e3491f8462f8d9e/nltk-3.4.4.zip">https://files.pythonhosted.org/packages/87/16/4d247e27c55a7b6412e7c4c86f2500ae61afcbf5932b9e3491f8462f8d9e/nltk-3.4.4.zip</a></p>
<p>Path to dependency file: /CSCI-6040/requirements.txt</p>
<p>Path to vulnerable library: teSource-ArchiveExtractor_7cf2688c-a52e-4a69-805f-333d4888c994/20190714194136_63331/20190714193948_depth_0/1/nltk-3.4.4/nltk-3.4.4</p>
<p>
Dependency Hierarchy:
- :x: **nltk-3.4.4.zip** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nltk is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3828>CVE-2021-3828</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3828">https://nvd.nist.gov/vuln/detail/CVE-2021-3828</a></p>
<p>Release Date: 2021-09-27</p>
<p>Fix Resolution: nltk - 3.6.4;nltk - 3.6.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-3828 (High) detected in nltk-3.4.4.zip - ## CVE-2021-3828 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>nltk-3.4.4.zip</b></p></summary>
<p>Natural Language Toolkit</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/87/16/4d247e27c55a7b6412e7c4c86f2500ae61afcbf5932b9e3491f8462f8d9e/nltk-3.4.4.zip">https://files.pythonhosted.org/packages/87/16/4d247e27c55a7b6412e7c4c86f2500ae61afcbf5932b9e3491f8462f8d9e/nltk-3.4.4.zip</a></p>
<p>Path to dependency file: /CSCI-6040/requirements.txt</p>
<p>Path to vulnerable library: teSource-ArchiveExtractor_7cf2688c-a52e-4a69-805f-333d4888c994/20190714194136_63331/20190714193948_depth_0/1/nltk-3.4.4/nltk-3.4.4</p>
<p>
Dependency Hierarchy:
- :x: **nltk-3.4.4.zip** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nltk is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3828>CVE-2021-3828</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-3828">https://nvd.nist.gov/vuln/detail/CVE-2021-3828</a></p>
<p>Release Date: 2021-09-27</p>
<p>Fix Resolution: nltk - 3.6.4;nltk - 3.6.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_main | cve high detected in nltk zip cve high severity vulnerability vulnerable library nltk zip natural language toolkit library home page a href path to dependency file csci requirements txt path to vulnerable library tesource archiveextractor depth nltk nltk dependency hierarchy x nltk zip vulnerable library vulnerability details nltk is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution nltk nltk step up your open source security game with whitesource | 0 |
4,595 | 23,830,747,085 | IssuesEvent | 2022-09-05 20:26:22 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Data Explorer frontend - Live demo readiness | work: frontend status: ready restricted: maintainers type: meta | The following checklist is for the live demo readiness of Data Explorer
- [ ] [New Explorations should not auto-save. Editing an existing exploration should auto-save](https://github.com/centerofci/mathesar/issues/1590)
- [ ] Use the same `display_options` for columns present in tables, for explorations
- [ ] Restrict columns from being deleted when a transformation is added to them
- [ ] Show column information for virtual columns
- [ ] `_array` mathesar type
- [ ] Type specific handling for items within array
- [ ] Handle filtering, grouping
- Fix column type calculation bug for summarized columns
- [ ] [Provide targeted error messages when tables & columns involved in an exploration are no longer present](https://github.com/centerofci/mathesar/issues/1596)
- [ ] Allow summarization by multiple fields when moving from a table to Data Explorer which is grouped by multiple columns
- Currently the user can only summarize a table which is grouped by a single column | True | Data Explorer frontend - Live demo readiness - The following checklist is for the live demo readiness of Data Explorer
- [ ] [New Explorations should not auto-save. Editing an existing exploration should auto-save](https://github.com/centerofci/mathesar/issues/1590)
- [ ] Use the same `display_options` for columns present in tables, for explorations
- [ ] Restrict columns from being deleted when a transformation is added to them
- [ ] Show column information for virtual columns
- [ ] `_array` mathesar type
- [ ] Type specific handling for items within array
- [ ] Handle filtering, grouping
- Fix column type calculation bug for summarized columns
- [ ] [Provide targeted error messages when tables & columns involved in an exploration are no longer present](https://github.com/centerofci/mathesar/issues/1596)
- [ ] Allow summarization by multiple fields when moving from a table to Data Explorer which is grouped by multiple columns
- Currently the user can only summarize a table which is grouped by a single column | main | data explorer frontend live demo readiness the following checklist is for the live demo readiness of data explorer use the same display options for columns present in tables for explorations restrict columns from being deleted when a transformation is added to them show column information for virtual columns array mathesar type type specific handling for items within array handle filtering grouping fix column type calculation bug for summarized columns allow summarization by multiple fields when moving from a table to data explorer which is grouped by multiple columns currently the user can only summarize a table which is grouped by a single column | 1 |
39,865 | 6,777,755,117 | IssuesEvent | 2017-10-28 00:42:12 | Robo3D/support.robo3d.com | https://api.github.com/repos/Robo3D/support.robo3d.com | closed | R1+ Docs changes | Documentation invalid | Image for R1+ that is similar to the headers images of R2 and C2 (with logo)
@JonathanWegner can you provide an image for the R1 similar to the R2 and C2 header images (here is an example: http://docs.robo3d.com/en/latest/R2/index.html)
The current R1+ image is just the printer, no other branding (http://support-site.readthedocs.io/en/dev/R1Plus/index.html) | 1.0 | R1+ Docs changes - Image for R1+ that is similar to the headers images of R2 and C2 (with logo)
@JonathanWegner can you provide an image for the R1 similar to the R2 and C2 header images (here is an example: http://docs.robo3d.com/en/latest/R2/index.html)
The current R1+ image is just the printer, no other branding (http://support-site.readthedocs.io/en/dev/R1Plus/index.html) | non_main | docs changes image for that is similar to the headers images of and with logo jonathanwegner can you provide an image for the similar to the and header images here is an example the current image is just the printer no other branding | 0 |
133,056 | 18,796,581,223 | IssuesEvent | 2021-11-08 23:17:26 | cagov/design-system | https://api.github.com/repos/cagov/design-system | closed | code centralization: publish footer component | Component CA Design System Component - footer | The footer component is currently used in both headless cannabis and drought sites. The code required to render it should be published to npm with a readme describing the module and the required package files. The code for the component should be checked into the design-system repository and deleted from the cannabis and drought repositories. The feature should be used in both sites via npm install. All sass variables not defined inside this component should be replaced with css variables. These variables should be defined in the site code that installs this component
This is a first step towards making this component a member of the design system. We are publishing it in a central location now to prevent code drift. | 1.0 | code centralization: publish footer component - The footer component is currently used in both headless cannabis and drought sites. The code required to render it should be published to npm with a readme describing the module and the required package files. The code for the component should be checked into the design-system repository and deleted from the cannabis and drought repositories. The feature should be used in both sites via npm install. All sass variables not defined inside this component should be replaced with css variables. These variables should be defined in the site code that installs this component
This is a first step towards making this component a member of the design system. We are publishing it in a central location now to prevent code drift. | non_main | code centralization publish footer component the footer component is currently used in both headless cannabis and drought sites the code required to render it should be published to npm with a readme describing the module and the required package files the code for the component should be checked into the design system repository and deleted from the cannabis and drought repositories the feature should be used in both sites via npm install all sass variables not defined inside this component should be replaced with css variables these variables should be defined in the site code that installs this component this is a first step towards making this component a member of the design system we are publishing it in a central location now to prevent code drift | 0 |
158,052 | 6,020,995,184 | IssuesEvent | 2017-06-07 17:42:10 | jaredpalmer/razzle | https://api.github.com/repos/jaredpalmer/razzle | closed | Importing Font Awesome css | bug priority: medium | I am trying to extend Razzle to handle font awesome. Font awesome requires a ?v= as part of the path (as discussed here: https://github.com/facebookincubator/create-react-app/issues/295). Razzle appears to break in the same way discussed in that issue when trying to import font-awesome.css. It seems the fix is to add regex to the loader (https://github.com/facebookincubator/create-react-app/pull/298#discussion-diff-72889071L76).
Just for reference, I am working off of the with-typescript example project. Thanks. | 1.0 | Importing Font Awesome css - I am trying to extend Razzle to handle font awesome. Font awesome requires a ?v= as part of the path (as discussed here: https://github.com/facebookincubator/create-react-app/issues/295). Razzle appears to break in the same way discussed in that issue when trying to import font-awesome.css. It seems the fix is to add regex to the loader (https://github.com/facebookincubator/create-react-app/pull/298#discussion-diff-72889071L76).
Just for reference, I am working off of the with-typescript example project. Thanks. | non_main | importing font awesome css i am trying to extend razzle to handle font awesome font awesome requires a v as part of the path as discussed here razzle appears to break in the same way discussed in that issue when trying to import font awesome css it seems the fix is to add regex to the loader just for reference i am working off of the with typescript example project thanks | 0 |
4,026 | 18,797,520,186 | IssuesEvent | 2021-11-09 00:56:10 | scott-ainsworth/new-avahi-aliases | https://api.github.com/repos/scott-ainsworth/new-avahi-aliases | closed | Flatten the `aliases` module | maintainability | The `aliases` module has only two structs yet the module is in a subdirectory. Consider moving the structs directly under `src/`. | True | Flatten the `aliases` module - The `aliases` module has only two structs yet the module is in a subdirectory. Consider moving the structs directly under `src/`. | main | flatten the aliases module the aliases module has only two structs yet the module is in a subdirectory consider moving the structs directly under src | 1 |
1,656 | 6,574,034,331 | IssuesEvent | 2017-09-11 11:11:04 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | lineinfile insertafter=EOF, Replace last line instead of insert after last line | affects_2.3 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
lineinfile insertafter=EOF
http://docs.ansible.com/ansible/lineinfile_module.html
ansible 2.1.0.0
Ansible Configuration: default
CentOS Linux release 7.2.1511 (Core) 3.10.0-327.36.1.el7.x86_64
##### SUMMARY
BUG: The lastline is deleted and replaced, with lineinfile insertafter=EOF
Want to add a new string to the end of the file, but the result is that the lastline is deleted and replaced. Replace last line instead of insert after last line.
##### STEPS TO REPRODUCE
/etc/crontab
*/45 * * * * root /sbin/blabla
*/45 * * * * root /sbin/tratra
Playbook section:
- name: add at the end of the file /etc/crontab rhn_check
lineinfile: dest=/etc/crontab
regexp=''
insertafter=EOF
line='*/60 * * * * root /sbin/rhn_chk'
##### EXPECTED RESULTS
Line added to the crontab
*/45 * * * * root /sbin/blabla
*/45 * * * * root /sbin/tratra
*/60 * * * * root /sbin/rhn_chk
##### ACTUAL RESULTS
/etc/crontab
*/45 * * * * root /sbin/blabla
*/60 * * * * root /sbin/rhn_chk
Last line in the file deleted and replaced
| True | lineinfile insertafter=EOF, Replace last line instead of insert after last line - ##### ISSUE TYPE
- Bug Report
lineinfile insertafter=EOF
http://docs.ansible.com/ansible/lineinfile_module.html
ansible 2.1.0.0
Ansible Configuration: default
CentOS Linux release 7.2.1511 (Core) 3.10.0-327.36.1.el7.x86_64
##### SUMMARY
BUG: The lastline is deleted and replaced, with lineinfile insertafter=EOF
Want to add a new string to the end of the file, but the result is that the lastline is deleted and replaced. Replace last line instead of insert after last line.
##### STEPS TO REPRODUCE
/etc/crontab
*/45 * * * * root /sbin/blabla
*/45 * * * * root /sbin/tratra
Playbook section:
- name: add at the end of the file /etc/crontab rhn_check
lineinfile: dest=/etc/crontab
regexp=''
insertafter=EOF
line='*/60 * * * * root /sbin/rhn_chk'
##### EXPECTED RESULTS
Line added to the crontab
*/45 * * * * root /sbin/blabla
*/45 * * * * root /sbin/tratra
*/60 * * * * root /sbin/rhn_chk
##### ACTUAL RESULTS
/etc/crontab
*/45 * * * * root /sbin/blabla
*/60 * * * * root /sbin/rhn_chk
Last line in the file deleted and replaced
| main | lineinfile insertafter eof replace last line instead of insert after last line issue type bug report lineinfile insertafter eof ansible ansible configuration default centos linux release core summary bug the lastline is deleted and replaced with lineinfile insertafter eof want to add a new string to the end of the file but the result is that the lastline is deleted and replaced replace last line instead of insert after last line steps to reproduce etc crontab root sbin blabla root sbin tratra playbook section name add at the end of the file etc crontab rhn check lineinfile dest etc crontab regexp insertafter eof line root sbin rhn chk expected results line added to the crontab root sbin blabla root sbin tratra root sbin rhn chk actual results etc crontab root sbin blabla root sbin rhn chk last line in the file deleted and replaced | 1 |
71,888 | 30,922,063,923 | IssuesEvent | 2023-08-06 02:37:11 | Zahlungsmittel/Zahlungsmittel | https://api.github.com/repos/Zahlungsmittel/Zahlungsmittel | opened | 🐛 [Bug][Frontend] Redirect to email link after login | service: wallet frontend bug imported | <a href="https://github.com/Elweyn"><img src="https://avatars.githubusercontent.com/u/33051975?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [Elweyn](https://github.com/Elweyn)**
_Thursday Jul 06, 2023 at 08:59 GMT_
_Originally opened as https://github.com/gradido/gradido/issues/3130_
----
<!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates -->
## 🐛 Bugreport
<!-- Describe your issue in detail. Include screenshots if needed. Give us as much information as possible. Use a clear and concise description of what the bug is.-->
Link sends to the application but most of the time user are logout and are asked to login,
after the login the user is not redirected to the link.
## 🤖 ToDos
- [ ] Link address redirect to the right address after login
| 1.0 | 🐛 [Bug][Frontend] Redirect to email link after login - <a href="https://github.com/Elweyn"><img src="https://avatars.githubusercontent.com/u/33051975?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [Elweyn](https://github.com/Elweyn)**
_Thursday Jul 06, 2023 at 08:59 GMT_
_Originally opened as https://github.com/gradido/gradido/issues/3130_
----
<!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates -->
## 🐛 Bugreport
<!-- Describe your issue in detail. Include screenshots if needed. Give us as much information as possible. Use a clear and concise description of what the bug is.-->
Link sends to the application but most of the time user are logout and are asked to login,
after the login the user is not redirected to the link.
## 🤖 ToDos
- [ ] Link address redirect to the right address after login
| non_main | 🐛 redirect to email link after login issue by thursday jul at gmt originally opened as 🐛 bugreport link sends to the application but most of the time user are logout and are asked to login after the login the user is not redirected to the link 🤖 todos link address redirect to the right address after login | 0 |
1,330 | 5,707,446,031 | IssuesEvent | 2017-04-18 13:54:56 | zuazo/dockerspec | https://api.github.com/repos/zuazo/dockerspec | closed | Cannot start bash at run - fail the test | Status: Maintainer Review Needed Type: Question | ### Dockerspec Version
0.4.1
### Ruby Version
ruby 2.3.4p301 (2017-03-30 revision 58214) [x86_64-linux]
### Platform Details
Alpine Linux v3.4
### Scenario
Try to build and test a Dockerfile with Node installed - the image does not start any service, so it's killed before the test can run
### Steps to Reproduce
Create a Docker file with NodeJS installed and no CMD
Start a little test :
```ruby
require 'dockerspec/serverspec'
describe 'My Dockerfile' do
describe docker_build('../.') do
it { should have_user 'nobody' }
describe docker_run(described_image) do
describe package('nodejs') do
it { should be_installed }
end
end
end
end
```
### Expected Result
nodejs IS installed
### Actual Result
Got an error because docker not running :
> 2) My Dockerfile Docker Build from path: "/projectfiles" Serverspec on tag: "94b3e9e391dc" Package "nodejs" should be installed
> Failure/Error: it { should be_installed }
> Docker::Error::NotFoundError:
> No such exec instance '27f4a7b4131e13343f0654fc2a3f38596209271e55cffa6c687aedd5e45a5823' found in daemon
So the real question is :
**How to start a bash in container to keep it alive when the test run ? (and kill all after ;))** | True | Cannot start bash at run - fail the test - ### Dockerspec Version
0.4.1
### Ruby Version
ruby 2.3.4p301 (2017-03-30 revision 58214) [x86_64-linux]
### Platform Details
Alpine Linux v3.4
### Scenario
Try to build and test a Dockerfile with Node installed - the image does not start any service, so it's killed before the test can run
### Steps to Reproduce
Create a Docker file with NodeJS installed and no CMD
Start a little test :
```ruby
require 'dockerspec/serverspec'
describe 'My Dockerfile' do
describe docker_build('../.') do
it { should have_user 'nobody' }
describe docker_run(described_image) do
describe package('nodejs') do
it { should be_installed }
end
end
end
end
```
### Expected Result
nodejs IS installed
### Actual Result
Got an error because docker not running :
> 2) My Dockerfile Docker Build from path: "/projectfiles" Serverspec on tag: "94b3e9e391dc" Package "nodejs" should be installed
> Failure/Error: it { should be_installed }
> Docker::Error::NotFoundError:
> No such exec instance '27f4a7b4131e13343f0654fc2a3f38596209271e55cffa6c687aedd5e45a5823' found in daemon
So the real question is :
**How to start a bash in container to keep it alive when the test run ? (and kill all after ;))** | main | cannot start bash at run fail the test dockerspec version ruby version ruby revision platform details alpine linux scenario try to build and test a dockerfile with node installed the image does not start any service so it s killed before the test can run steps to reproduce create a docker file with nodejs installed and no cmd start a little test ruby require dockerspec serverspec describe my dockerfile do describe docker build do it should have user nobody describe docker run described image do describe package nodejs do it should be installed end end end end expected result nodejs is installed actual result got an error because docker not running my dockerfile docker build from path projectfiles serverspec on tag package nodejs should be installed failure error it should be installed docker error notfounderror no such exec instance found in daemon so the real question is how to start a bash in container to keep it alive when the test run and kill all after | 1 |
4,374 | 22,240,407,602 | IssuesEvent | 2022-06-09 04:15:30 | cncf/glossary | https://api.github.com/repos/cncf/glossary | closed | Visualization bug in navbar-lang-selector | maintainers | ### The current behavior:
When a user clicks on the language selector in the navigation bar, it is displayed partially hidden in some languages.
<img width="717" alt="Screenshot 2022-06-08 at 09 49 54" src="https://user-images.githubusercontent.com/9283164/172722237-40b4afe7-e438-4f93-8dc0-1d2f40fc357d.png">
### The expected behavior:
The language selector in the navigation bar is displayed well when it shows the language list.
### How to reproduce it:
Clicking on the language selector in the navigation bar
| True | Visualization bug in navbar-lang-selector - ### The current behavior:
When a user clicks on the language selector in the navigation bar, it is displayed partially hidden in some languages.
<img width="717" alt="Screenshot 2022-06-08 at 09 49 54" src="https://user-images.githubusercontent.com/9283164/172722237-40b4afe7-e438-4f93-8dc0-1d2f40fc357d.png">
### The expected behavior:
The language selector in the navigation bar is displayed well when it shows the language list.
### How to reproduce it:
Clicking on the language selector in the navigation bar
| main | visualization bug in navbar lang selector the current behavior when a user clicks on the language selector in the navigation bar it is displayed partially hidden in some languages img width alt screenshot at src the expected behavior the language selector in the navigation bar is displayed well when it shows the language list how to reproduce it clicking on the language selector in the navigation bar | 1 |
3,582 | 14,394,527,924 | IssuesEvent | 2020-12-03 01:30:16 | diofant/diofant | https://api.github.com/repos/diofant/diofant | closed | Change convention for indexing coefficients of DUP | maintainability performance polys | For legacy reasons, the Diofant uses SymPy's way to represent the dense univariate polynomial DUP), namely - the leading coefficient (LC) is coming first. I doubt, that this indexing gives some benefits from the performance point of view (simple exponent shift?), but clearly complicates things while porting published algorithms, which usually (always?) assume that the index 0 coefficient is the terminal coefficient (TC), not the leading one.
I think, it's a good time to change the convention. And, maybe, restore `PolyElement.from_list()` function to make conversion from the dense recursive representation (former DMP case). | True | Change convention for indexing coefficients of DUP - For legacy reasons, the Diofant uses SymPy's way to represent the dense univariate polynomial DUP), namely - the leading coefficient (LC) is coming first. I doubt, that this indexing gives some benefits from the performance point of view (simple exponent shift?), but clearly complicates things while porting published algorithms, which usually (always?) assume that the index 0 coefficient is the terminal coefficient (TC), not the leading one.
I think, it's a good time to change the convention. And, maybe, restore `PolyElement.from_list()` function to make conversion from the dense recursive representation (former DMP case). | main | change convention for indexing coefficients of dup for legacy reasons the diofant uses sympy s way to represent the dense univariate polynomial dup namely the leading coefficient lc is coming first i doubt that this indexing gives some benefits from the performance point of view simple exponent shift but clearly complicates things while porting published algorithms which usually always assume that the index coefficient is the terminal coefficient tc not the leading one i think it s a good time to change the convention and maybe restore polyelement from list function to make conversion from the dense recursive representation former dmp case | 1 |
60,038 | 8,401,632,402 | IssuesEvent | 2018-10-11 02:08:42 | SoftStackFactory/reboot | https://api.github.com/repos/SoftStackFactory/reboot | opened | Docs: Timeline component, usage, etc. | documentation | Write documentation on the timeline component and its usage. Include any relevant information on how to set it up or the data structures & logic involved.
Write the documentation here:
https://github.com/SoftStackFactory/reboot/wiki/Timeline-Component | 1.0 | Docs: Timeline component, usage, etc. - Write documentation on the timeline component and its usage. Include any relevant information on how to set it up or the data structures & logic involved.
Write the documentation here:
https://github.com/SoftStackFactory/reboot/wiki/Timeline-Component | non_main | docs timeline component usage etc write documentation on the timeline component and its usage include any relevant information on how to set it up or the data structures logic involved write the documentation here | 0 |
44,153 | 2,899,627,412 | IssuesEvent | 2015-06-17 12:36:48 | leo-project/leofs | https://api.github.com/repos/leo-project/leofs | closed | [leo_object_storage] Found some incorrect data-blocks during the data-compaction | Bug Improve Priority-HIGH _leo_object_storage _leo_storage | Last Friday, I found some incorrect data-blocks in some Leo Storage AVS files, as below:
### Error log
```
[E] storage_0@127.0.0.1 2015-06-15 00:45:15.130856 +0000 1434329115 leo_compact_fsm_worker:execute_1/4 1036 [{obj_container_path,"/var/lib/jenkins/jobs/Tests_for_aws-sdk_clients_with_LeoFS/workspace/leofs/package/leo_storage_0/avs/object/2.avs_63601547870"},{error_pos_start,577},{error_pos_end,739},{errors,[{invalid_format,unexpected_time_format}]}]
[E] storage_0@127.0.0.1 2015-06-15 00:45:15.132416 +0000 1434329115 leo_compact_fsm_worker:execute_1/4 1036 [{obj_container_path,"/var/lib/jenkins/jobs/Tests_for_aws-sdk_clients_with_LeoFS/workspace/leofs/package/leo_storage_0/avs/object/1.avs_63601547870"},{error_pos_start,586},{error_pos_end,749},{errors,[{invalid_format,unexpected_time_format}]}]
[E] storage_0@127.0.0.1 2015-06-15 00:45:15.169797 +0000 1434329115 leo_compact_fsm_worker:execute_1/4 1036 [{obj_container_path,"/var/lib/jenkins/jobs/Tests_for_aws-sdk_clients_with_LeoFS/workspace/leofs/package/leo_storage_0/avs/object/1.avs_63601547870"},{error_pos_start,945},{error_pos_end,1108},{errors,[{invalid_format,unexpected_time_format}]}]
[E] storage_0@127.0.0.1 2015-06-15 00:45:15.668450 +0000 1434329115 leo_compact_fsm_worker:execute_1/4 1036 [{obj_container_path,"/var/lib/jenkins/jobs/Tests_for_aws-sdk_clients_with_LeoFS/workspace/leofs/package/leo_storage_0/avs/object/1.avs_63601547870"},{error_pos_start,31501},{error_pos_end,31652},{errors,[{invalid_format,unexpected_time_format}]}]
```
And then, I recognised that the S3 Client OR Leo Storage does not correctly set the timestamp of an object when removing it, as below:
```erlang
%% Error log during the compaction:
[E] storage_0@127.0.0.1 2015-06-16 01:49:02.324330 +0000 1434419342 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test7877/testFile s.org\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:23.739688 +0000 1434419363 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:23.766323 +0000 1434419363 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile.copy\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:29.770379 +0000 1434419369 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile\n2">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:29.774589 +0000 1434419369 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:29.786912 +0000 1434419369 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile.copy\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:29.800619 +0000 1434419369 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile.single\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
```
### Take a measure
* Avoid inserting incorrect data into an AVS file, and set the correct timestamp on an object as well as its metadata
| 1.0 | [leo_object_storage] Found some incorrect data-blocks during the data-compaction - Last Friday, I found some incorrect data-blocks in some Leo Storage AVS files, as below:
### Error log
```
[E] storage_0@127.0.0.1 2015-06-15 00:45:15.130856 +0000 1434329115 leo_compact_fsm_worker:execute_1/4 1036 [{obj_container_path,"/var/lib/jenkins/jobs/Tests_for_aws-sdk_clients_with_LeoFS/workspace/leofs/package/leo_storage_0/avs/object/2.avs_63601547870"},{error_pos_start,577},{error_pos_end,739},{errors,[{invalid_format,unexpected_time_format}]}]
[E] storage_0@127.0.0.1 2015-06-15 00:45:15.132416 +0000 1434329115 leo_compact_fsm_worker:execute_1/4 1036 [{obj_container_path,"/var/lib/jenkins/jobs/Tests_for_aws-sdk_clients_with_LeoFS/workspace/leofs/package/leo_storage_0/avs/object/1.avs_63601547870"},{error_pos_start,586},{error_pos_end,749},{errors,[{invalid_format,unexpected_time_format}]}]
[E] storage_0@127.0.0.1 2015-06-15 00:45:15.169797 +0000 1434329115 leo_compact_fsm_worker:execute_1/4 1036 [{obj_container_path,"/var/lib/jenkins/jobs/Tests_for_aws-sdk_clients_with_LeoFS/workspace/leofs/package/leo_storage_0/avs/object/1.avs_63601547870"},{error_pos_start,945},{error_pos_end,1108},{errors,[{invalid_format,unexpected_time_format}]}]
[E] storage_0@127.0.0.1 2015-06-15 00:45:15.668450 +0000 1434329115 leo_compact_fsm_worker:execute_1/4 1036 [{obj_container_path,"/var/lib/jenkins/jobs/Tests_for_aws-sdk_clients_with_LeoFS/workspace/leofs/package/leo_storage_0/avs/object/1.avs_63601547870"},{error_pos_start,31501},{error_pos_end,31652},{errors,[{invalid_format,unexpected_time_format}]}]
```
And then, I recognised that the S3 Client OR Leo Storage does not correctly set the timestamp of an object when removing it, as below:
```erlang
%% Error log during the compaction:
[E] storage_0@127.0.0.1 2015-06-16 01:49:02.324330 +0000 1434419342 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test7877/testFile s.org\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:23.739688 +0000 1434419363 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:23.766323 +0000 1434419363 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile.copy\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:29.770379 +0000 1434419369 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile\n2">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:29.774589 +0000 1434419369 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:29.786912 +0000 1434419369 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile.copy\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
[E] storage_0@127.0.0.1 2015-06-16 01:49:29.800619 +0000 1434419369 leo_object_storage_haystack:put_fun_2/3 586 [{key,<<"test/testFile.single\n1">>},{del,1},{timestamp,0},{cause,"Not set timestamp correctly"}]
```
### Take a measure
* Avoid inserting incorrect data into an AVS file, and set the correct timestamp on an object as well as its metadata
| non_main | find incorrect some data blocks during the data compaction last friday i found incorrect some data blocks at some leo storage s avs files as below error log storage leo compact fsm worker execute storage leo compact fsm worker execute storage leo compact fsm worker execute storage leo compact fsm worker execute and then i ve recognised client or leo storage does not set correctly timestamp of an object when removing it as below erlang error log during the compaction storage leo object storage haystack put fun storage leo object storage haystack put fun storage leo object storage haystack put fun storage leo object storage haystack put fun storage leo object storage haystack put fun storage leo object storage haystack put fun storage leo object storage haystack put fun take a measure avoid inserting incorrect data into a avs file and set correct timestamp of an object as well as a metadata | 0 |
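The "take a measure" item in the leofs record above amounts to validating a record before it is written. A minimal language-agnostic sketch of that guard (hypothetical field and function names; the real fix lives in leo_object_storage's Erlang code):

```python
# Hypothetical sketch of the write-time guard the issue calls for:
# reject tombstone (del == 1) records whose timestamp was never set,
# instead of letting them reach the AVS file and fail during compaction.

class InvalidRecord(ValueError):
    pass

def validate_before_put(record):
    """Raise InvalidRecord instead of persisting a bad metadata record."""
    if record.get("del") == 1 and record.get("timestamp", 0) <= 0:
        raise InvalidRecord("Not set timestamp correctly: %r" % (record.get("key"),))
    return record

# A well-formed record passes through unchanged:
good = validate_before_put({"key": "test/testFile", "del": 0, "timestamp": 1434419342})
```

Rejecting the record at put time keeps the `{timestamp,0}` tombstones seen in the logs out of the AVS file entirely.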
5,868 | 31,836,059,885 | IssuesEvent | 2023-09-14 13:35:42 | GaloyMoney/galoy | https://api.github.com/repos/GaloyMoney/galoy | closed | switch to new graphql subscription protocol | graphql api maintainability | We are using the old, deprecated, and no longer maintained protocol.
https://www.apollographql.com/docs/apollo-server/data/subscriptions/#switching-from-subscriptions-transport-ws
this requires updating both backend and frontend at the same time. | True | switch to new graphql subscription protocol - We are using the old, deprecated, and no longer maintained protocol.
https://www.apollographql.com/docs/apollo-server/data/subscriptions/#switching-from-subscriptions-transport-ws
this requires updating both backend and frontend at the same time. | main | switch to new graphql subscription protocol we are using the old deprecated and no longer maintained protocol this requires updating both backend and frontend at the same time | 1 |
767,296 | 26,917,996,739 | IssuesEvent | 2023-02-07 08:22:24 | telstra/open-kilda | https://api.github.com/repos/telstra/open-kilda | reopened | Switch validation should not fail if logical port information cannot be received | priority/3-normal | Current behavior:
1. Validate switch
2. If switch supports logical ports, but has issues with its grpc connection, then the whole validation operation fails.
Desired behavior:
Return validation information regardless of having proper grpc access. If logical ports section cannot be populated, it should show corresponding error description only for "logical ports" section. Rules and Meters information should still be populated. | 1.0 | Switch validation should not fail if logical port information cannot be received - Current behavior:
1. Validate switch
2. If switch supports logical ports, but has issues with its grpc connection, then the whole validation operation fails.
Desired behavior:
Return validation information regardless of having proper grpc access. If logical ports section cannot be populated, it should show corresponding error description only for "logical ports" section. Rules and Meters information should still be populated. | non_main | switch validation should not fail if logical port information cannot be received current behavior validate switch if switch supports logical ports but has issues with its grpc connection then the whole validation operation fails desired behavior return validation information regardless of having proper grpc access if logical ports section cannot be populated it should show corresponding error description only for logical ports section rules and meters information should still be populated | 0 |
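The desired behavior in the open-kilda record above — populate rules and meters even when the gRPC-backed logical-ports lookup fails, attaching an error description only to that section — can be sketched as follows (hypothetical function and field names, not open-kilda's actual API):

```python
# Hypothetical sketch: build the validation report section by section,
# degrading gracefully when only the logical-ports source is unreachable.

def validate_switch(get_rules, get_meters, get_logical_ports):
    report = {
        "rules": get_rules(),       # populated regardless of grpc access
        "meters": get_meters(),     # populated regardless of grpc access
        "logical_ports": None,
        "errors": {},
    }
    try:
        report["logical_ports"] = get_logical_ports()
    except ConnectionError as exc:  # grpc trouble must not fail the whole validation
        report["errors"]["logical_ports"] = str(exc)
    return report

def broken_grpc():
    raise ConnectionError("grpc channel unavailable")

# Rules and meters survive even though the logical-ports lookup fails:
partial = validate_switch(lambda: ["rule-1"], lambda: ["meter-1"], broken_grpc)
```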
72,199 | 7,290,626,377 | IssuesEvent | 2018-02-24 03:58:53 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Assert() invariants? | area/kubelet area/test lifecycle/rotten priority/awaiting-more-evidence sig/testing | Kubelet tests would stay green more often if I could easily assert some invariants at key places (e.g. pod.UID != ""). But this is really only for tests. Ideas welcome on how to do this in a sane way, or why not to do it at all.
| 2.0 | Assert() invariants? - Kubelet tests would stay green more often if I could easily assert some invariants at key places (e.g. pod.UID != ""). But this is really only for tests. Ideas welcome on how to do this in a sane way, or why not to do it at all.
| non_main | assert invariants kubelet tests would stay green more often if i could easily assert some invariants at key places e g pod uid but this is really only for tests ideas welcome on how to do this in a sane way or why not to do it at all | 0 |
1,491 | 6,425,710,074 | IssuesEvent | 2017-08-09 15:54:47 | DynamoRIO/dynamorio | https://api.github.com/repos/DynamoRIO/dynamorio | opened | remove guard page counts from heap and cache runtime option values | Maintainability Type-Feature | For heap and cache unit sizes specified in runtime options, we take guard pages
out of the requested size: so asking for 64K gives only 56K of usable space for
4K pages. The original logic was tuned for Windows without -vm_reserve where
the OS allocation granularity matters and we don't want to waste space.
On UNIX, however, with the new 4K (or page size) vmm blocks, and with
-vm_reserve covering most allocations at least for smaller applications, the OS
granularity is less important: xref i#2597.
Having the guards included makes it difficult to tune the default sizes based on
actual usage (and even more so when guard pages are sometimes turned off). This
issue covers making the heap and cache sizes like the stack sizes where the
guard pages are added on top of the requested size. (In i#2592 I removed the
debug-build STACK_GUARD_PAGE which was removing a page from the given stack size
to make a guard page: now it matches release where what you ask for is the
usable size you get, for stacks.)
| True | remove guard page counts from heap and cache runtime option values - For heap and cache unit sizes specified in runtime options, we take guard pages
out of the requested size: so asking for 64K gives only 56K of usable space for
4K pages. The original logic was tuned for Windows without -vm_reserve where
the OS allocation granularity matters and we don't want to waste space.
On UNIX, however, with the new 4K (or page size) vmm blocks, and with
-vm_reserve covering most allocations at least for smaller applications, the OS
granularity is less important: xref i#2597.
Having the guards included makes it difficult to tune the default sizes based on
actual usage (and even more so when guard pages are sometimes turned off). This
issue covers making the heap and cache sizes like the stack sizes where the
guard pages are added on top of the requested size. (In i#2592 I removed the
debug-build STACK_GUARD_PAGE which was removing a page from the given stack size
to make a guard page: now it matches release where what you ask for is the
usable size you get, for stacks.)
| main | remove guard page counts from heap and cache runtime option values for heap and cache unit sizes specified in runtime options we take guard pages out of the requested size so asking for gives only of usable space for pages the original logic was tuned for windows without vm reserve where the os allocation granularity matters and we don t want to waste space on unix however with the new or page size vmm blocks and with vm reserve covering most allocations at least for smaller applications the os granularity is less important xref i having the guards included makes it difficult to tune the default sizes based on actual usage and even more so when guard pages are sometimes turned off this isssue covers making the heap and cache sizes like the cache sizes where the guard pages are added on top of the requested size in i i removed the debug build stack guard page which was removing a page from the given stack size to make a guard page now it matches release where what you ask for is the usable size you get for stacks | 1 |
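The accounting change in the DynamoRIO record above can be made concrete with a little arithmetic (a sketch, not DynamoRIO's code): under the old scheme, a 64 KiB unit-size option with 4 KiB pages yields only 56 KiB of usable space once two guard pages are carved out of it; under the proposed scheme the guards are allocated on top, so the usable size equals the requested size.

```python
# Sketch of the two guard-page accounting schemes (illustrative only).
PAGE_SIZE = 4 * 1024
GUARD_PAGES = 2            # one guard page below the unit, one above
KB = 1024

def usable_old(requested):
    """Old scheme: guard pages come out of the requested size."""
    return requested - GUARD_PAGES * PAGE_SIZE

def alloc_new(requested):
    """Proposed scheme: guards are added on top; usable == requested."""
    return {"usable": requested, "reserved": requested + GUARD_PAGES * PAGE_SIZE}
```

The new scheme trades a slightly larger reservation for option values that directly reflect usable space, which makes tuning the defaults against actual usage straightforward.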
18,411 | 10,111,144,597 | IssuesEvent | 2019-07-30 12:04:09 | Automattic/jetpack | https://api.github.com/repos/Automattic/jetpack | closed | Stats: store stats in transient instead of Jetpack option | Performance Stats WPCOM API [Pri] High [Type] Bug | Consider using transients to store data collected via `stats_get_from_restapi`:
https://github.com/Automattic/jetpack/blob/3ed3de86a58730377bbc29f8b7df7aa7729f28ce/modules/stats.php#L1632-L1689
This would improve performance when having to query for that data, especially on sites using a persistent object cache.
Related discussion:
- p1HpG7-7hA-p2 | True | Stats: store stats in transient instead of Jetpack option - Consider using transients to store data collected via `stats_get_from_restapi`:
https://github.com/Automattic/jetpack/blob/3ed3de86a58730377bbc29f8b7df7aa7729f28ce/modules/stats.php#L1632-L1689
This would improve performance when having to query for that data, especially on sites using a persistent object cache.
Related discussion:
- p1HpG7-7hA-p2 | non_main | stats store stats in transient instead of jetpack option consider using transients to store data collected via stats get from restapi this would improve performance when having to query for that data especially on sites using a persistent object cache related discussion | 0 |
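The difference the Jetpack record above relies on — an option persists until changed, while a transient is cached with an expiry and can be served from a persistent object cache — boils down to TTL-based caching. A hedged sketch of the idea (WordPress's actual API is `set_transient()`/`get_transient()` in PHP; this Python class is only an analogue):

```python
# Minimal TTL-cache sketch of what "store stats in a transient" buys:
# stale entries expire on their own instead of living forever as options.
import time

class TransientStore:
    def __init__(self, clock=time.time):
        self._data = {}
        self._clock = clock   # injectable clock keeps the sketch testable

    def set(self, key, value, ttl_seconds):
        self._data[key] = (value, self._clock() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self._clock() >= expires_at:   # expired: behave as a cache miss
            del self._data[key]
            return default
        return value
```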
18,748 | 3,703,780,821 | IssuesEvent | 2016-02-29 21:38:39 | couchbase/couchbase-lite-ios | https://api.github.com/repos/couchbase/couchbase-lite-ios | closed | CBLReplicator.isDocumentPending is inaccurate when replicator is offline | bug f: Replication hotfix review testneeded | It seems like `isDocumentPending` might not be working as one would expect it to when a device is completely offline.
# Setup to reproduce
* CBL (1.2 release)
* Sync Gateway (1.2 release)
* ToDoLite-iOS ([patched with minimally invasive sync status feature](https://github.com/PaulCapestany/ToDoLite-iOS/commit/5fdaf1d48904ad9a01e8a971c9d668b7bd522e4f) which displays synced text as **black**, and unsynced text as *grey*). Sidenote: relevant [PR](https://github.com/couchbaselabs/ToDoLite-iOS/pull/68).
[Follow these instructions in order to use Apple's "Network Link Conditioner"](http://nshipster.com/network-link-conditioner/) to test different connectivity qualities on both the simulator and on actual physical iOS devices.
In the ToDoLite-iOS project, change the `kSyncGatewayUrl` in `AppDelegate.m` if need be. Also, make sure that the `FacebookAppID` and `URL Scheme` in `Info.plist` is usable/accurate as well.
# Expected behavior
1. When on a good wifi connection, newly added or edited items in any of the tableViews should have **black text**.
2. When switching to the "Very Bad Network" setting in Apple's "Network Link Conditioner", newly added or edited items in any of the tableViews should have *grey text* if they haven't been successfully synced within 1.5 seconds (they should turn **black** again once the replication properly executes however).
3. When putting the testing device in 'Airplane Mode', newly added or edited items in any of the tableViews should have *grey text* since they're unable to sync, and should turn **black** again when back on a network connection.
# Actual behavior
1. Works as expected.
2. Works as expected.
3. Unexpected behavior. The text of newly added or edited items stays **black** even though items have not synced due to being in 'Airplane Mode' (and therefore should be *grey*).
| 1.0 | CBLReplicator.isDocumentPending is inaccurate when replicator is offline - It seems like `isDocumentPending` might not be working as one would expect it to when a device is completely offline.
# Setup to reproduce
* CBL (1.2 release)
* Sync Gateway (1.2 release)
* ToDoLite-iOS ([patched with minimally invasive sync status feature](https://github.com/PaulCapestany/ToDoLite-iOS/commit/5fdaf1d48904ad9a01e8a971c9d668b7bd522e4f) which displays synced text as **black**, and unsynced text as *grey*). Sidenote: relevant [PR](https://github.com/couchbaselabs/ToDoLite-iOS/pull/68).
[Follow these instructions in order to use Apple's "Network Link Conditioner"](http://nshipster.com/network-link-conditioner/) to test different connectivity qualities on both the simulator and on actual physical iOS devices.
In the ToDoLite-iOS project, change the `kSyncGatewayUrl` in `AppDelegate.m` if need be. Also, make sure that the `FacebookAppID` and `URL Scheme` in `Info.plist` is usable/accurate as well.
# Expected behavior
1. When on a good wifi connection, newly added or edited items in any of the tableViews should have **black text**.
2. When switching to the "Very Bad Network" setting in Apple's "Network Link Conditioner", newly added or edited items in any of the tableViews should have *grey text* if they haven't been successfully synced within 1.5 seconds (they should turn **black** again once the replication properly executes however).
3. When putting the testing device in 'Airplane Mode', newly added or edited items in any of the tableViews should have *grey text* since they're unable to sync, and should turn **black** again when back on a network connection.
# Actual behavior
1. Works as expected.
2. Works as expected.
3. Unexpected behavior. The text of newly added or edited items stays **black** even though items have not synced due to being in 'Airplane Mode' (and therefore should be *grey*).
| non_main | cblreplicator isdocumentpending is inaccurate when replicator is offline it seems like isdocumentpending might not be working as one would expect it to when a device is completely offline setup to reproduce cbl release sync gateway release todolite ios which displays synced text as black and unsynced text as grey sidenote relevant to test different connectivity qualities on both the simulator and on actual physical ios devices in the todolite ios project change the ksyncgatewayurl in appdelegate m if need be also make sure that the facebookappid and url scheme in info plist is usable accurate as well expected behavior when on a good wifi connection newly added or edited items in any of the tableviews should have black text when switching to the very bad network setting in apple s network link conditioner newly added or edited items in any of the tableviews should have grey text if they haven t been successfully synced within seconds they should turn black again once the replication properly executes however when putting the testing device in airplane mode newly added or edited items in any of the tableviews should have grey text since they re unable to sync and should turn black again when back on a network connection actual behavior works as expected works as expected unexpected behavior the text of newly added or edited items stays black even though items have not synced due to being in airplane mode and therefore should be grey | 0 |
3,415 | 13,182,086,056 | IssuesEvent | 2020-08-12 15:14:44 | duo-labs/cloudmapper | https://api.github.com/repos/duo-labs/cloudmapper | closed | Support lightsail | map unmaintained_functionality | Lightsail works differently than EC2. This should be supported for both CloudMapper prepare and public. | True | Support lightsail - Lightsail works differently than EC2. This should be supported for both CloudMapper prepare and public. | main | support lightsail lightsail works differently than this should be supported for both cloudmapper prepare and public | 1 |
2,340 | 8,373,060,009 | IssuesEvent | 2018-10-05 09:08:56 | Homebrew/homebrew-cask | https://api.github.com/repos/Homebrew/homebrew-cask | opened | Make a homebrew/cask-java tap? | awaiting maintainer feedback discussion | Current casks:
- `java`: OpenJDK 11
- `java8`: OracleJDK 8
- `oracle-jdk`: OracleJDK 11
- `java10`: OracleJDK 10
- `zulu`: ZuluJDK 10
- `zulu7`: ZuluJDK 7
- `zulu8`: ZuluJDK 8
- `zulu9`: ZuluJDK 9
Open PRs:
- `adoptopenjdk` JDK 11
- `sapmachine-jdk` JDK 11
Forthcoming PRs:
- `adoptopenjdk` 8/9/10
Dumping these all in one tap would make them easier to manage, I'm expecting more variants now that building JDKs is a thing. | True | Make a homebrew/cask-java tap? - Current casks:
- `java`: OpenJDK 11
- `java8`: OracleJDK 8
- `oracle-jdk`: OracleJDK 11
- `java10`: OracleJDK 10
- `zulu`: ZuluJDK 10
- `zulu7`: ZuluJDK 7
- `zulu8`: ZuluJDK 8
- `zulu9`: ZuluJDK 9
Open PRs:
- `adoptopenjdk` JDK 11
- `sapmachine-jdk` JDK 11
Forthcoming PRs:
- `adoptopenjdk` 8/9/10
Dumping these all in one tap would make them easier to manage, I'm expecting more variants now that building JDKs is a thing. | main | make a homebrew cask java tap current casks java openjdk oraclejdk oracle jdk oraclejdk oraclejdk zulu zulujdk zulujdk zulujdk zulujdk open prs adoptopenjdk jdk sapmachine jdk jdk forthcoming prs adoptopenjdk dumping these all in one tap would make them easier to manage i m expecting more variants now that building jdks is a thing | 1 |
236,651 | 18,104,773,220 | IssuesEvent | 2021-09-22 17:57:22 | operator-framework/operator-sdk | https://api.github.com/repos/operator-framework/operator-sdk | opened | Revise release docs regarding Netlify setup | kind/documentation | The instructions are a bit ambiguous and have required a bit of cleanup to get the docs working after each release. | 1.0 | Revise release docs regarding Netlify setup - The instructions are a bit ambiguous and have required a bit of cleanup to get the docs working after each release. | non_main | revise release docs regarding netlify setup the instructions are a bit ambiguous and have required a bit of cleanup to get the docs working after each release | 0 |
555 | 4,004,473,191 | IssuesEvent | 2016-05-12 07:28:32 | Particular/ServiceControl | https://api.github.com/repos/Particular/ServiceControl | closed | ServiceControl cannot be used through an ARR reverse proxy in ServicePulse | Size: S State: In Progress - Maintainer Prio Type: Bug | ## Symptoms
ServicePulse does not receive updates from the ServiceControl
## Who is affected
Any user attempting to install ServicePulse using ARR proxy in a non-root directory.
## POA
- [x] Update ServiceControl SignalR version to latest stable. #724
- [x] Smoke Test ServiceControl / ServicePulse
- [ ] Create new release of ServiceControl
- [ ] Announcement | True | ServiceControl cannot be used through an ARR reverse proxy in ServicePulse - ## Symptoms
ServicePulse does not receive updates from the ServiceControl
## Who is affected
Any user attempting to install ServicePulse using ARR proxy in a non-root directory.
## POA
- [x] Update ServiceControl SignalR version to latest stable. #724
- [x] Smoke Test ServiceControl / ServicePulse
- [ ] Create new release of ServiceControl
- [ ] Announcement | main | servicecontrol cannot be used through an arr reverse proxy in servicepulse symptoms servicepulse does not receive updates from the servicecontrol who is affected any user attempting to install servicepulse using arr proxy in a non root directory poa update servicecontrol signalr version to latest stable smoke test servicecontrol servicepulse create new release of servicecontrol announcement | 1 |
5,301 | 12,308,671,314 | IssuesEvent | 2020-05-12 07:39:45 | dotnet/docs | https://api.github.com/repos/dotnet/docs | closed | State Changes section example | :book: guide - Blazor :books: Area - .NET Architecture Guide :watch: Not Triaged | ```csharp
public class AppState
{
public string Message { get; }
// Lets components receive change notifications
public event Action OnChange;
public void UpdateMessage(string message)
{
shortlist.Add(itinerary);
NotifyStateChanged();
}
private void NotifyStateChanged() => OnChange?.Invoke();
}
```
Where do shortlist and itinerary come from?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 60c195c0-99f6-64f5-66a3-4e69109dc8ee
* Version Independent ID: 24368808-4b90-f75a-e887-eb72a1c85f5b
* Content: [Build reusable UI components with Blazor](https://docs.microsoft.com/en-us/dotnet/architecture/blazor-for-web-forms-developers/components#feedback)
* Content Source: [docs/architecture/blazor-for-web-forms-developers/components.md](https://github.com/dotnet/docs/blob/master/docs/architecture/blazor-for-web-forms-developers/components.md)
* Product: **dotnet-architecture**
* Technology: **blazor**
* GitHub Login: @danroth27
* Microsoft Alias: **daroth** | 1.0 | State Changes section example - ```csharp
public class AppState
{
public string Message { get; }
// Lets components receive change notifications
public event Action OnChange;
public void UpdateMessage(string message)
{
shortlist.Add(itinerary);
NotifyStateChanged();
}
private void NotifyStateChanged() => OnChange?.Invoke();
}
```
Where do shortlist and itinerary come from?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 60c195c0-99f6-64f5-66a3-4e69109dc8ee
* Version Independent ID: 24368808-4b90-f75a-e887-eb72a1c85f5b
* Content: [Build reusable UI components with Blazor](https://docs.microsoft.com/en-us/dotnet/architecture/blazor-for-web-forms-developers/components#feedback)
* Content Source: [docs/architecture/blazor-for-web-forms-developers/components.md](https://github.com/dotnet/docs/blob/master/docs/architecture/blazor-for-web-forms-developers/components.md)
* Product: **dotnet-architecture**
* Technology: **blazor**
* GitHub Login: @danroth27
* Microsoft Alias: **daroth** | non_main | state changes section example csharp public class appstate public string message get lets components receive change notifications public event action onchange public void updatemessage string message shortlist add itinerary notifystatechanged private void notifystatechanged onchange invoke where does shortlist and itinerary comes from document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product dotnet architecture technology blazor github login microsoft alias daroth | 0 |
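The question in the dotnet/docs record above is well taken: `shortlist` and `itinerary` look like copy-paste leftovers from a different sample, and `Message` has no setter, so `UpdateMessage` presumably was meant to assign the message and then notify. A sketch of the intended state-container pattern (in Python for illustration; the docs sample is C#, and the assignment in `update_message` is my assumption about the intent):

```python
# Observer-style state container: components subscribe to on_change and
# re-render when update_message mutates the state and fires the callbacks.

class AppState:
    def __init__(self):
        self._message = ""
        self._subscribers = []   # plays the role of the C# OnChange event

    @property
    def message(self):
        return self._message

    def on_change(self, callback):
        self._subscribers.append(callback)

    def update_message(self, message):
        self._message = message          # what the sample presumably intended
        self._notify_state_changed()

    def _notify_state_changed(self):
        for callback in self._subscribers:
            callback()

state = AppState()
renders = []
state.on_change(lambda: renders.append(state.message))
state.update_message("hello")
```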
162,171 | 12,625,770,921 | IssuesEvent | 2020-06-14 13:34:07 | Thy-Vipe/BeastsOfBermuda-issues | https://api.github.com/repos/Thy-Vipe/BeastsOfBermuda-issues | opened | [Quality of life] Default talent tree background when opening the talent menu | Quality of life UI tester-team | _Originally written by **TripTrap | 76561198378851871**_
Game Version: 1.1.938
*===== System Specs =====
CPU Brand: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
Vendor: GenuineIntel
GPU Brand: NVIDIA GeForce GTX 1070 Ti
GPU Driver Info: Unknown
Num CPU Cores: 6
===================*
Context: **N/A**
*Expected Results:* The talent tree background shows the correct deity when the Talent Menu is opened
*Actual Results:* When the talent menu is opened, the background will not update until a tick happens - unless you spend a talent point ofc.
*Replication:* Go on any creature, spend a talent point. Close the Talent Tree menu and reopen it again. Note that you will have the default background until a tick happens - unless you spend a talent point.
*Evidence:* https://drive.google.com/file/d/1QMMCb0a3mZVPUYXFuxbiA5it_jBgGaoR/view
[Original Report from Official] | 1.0 | [Quality of life] Default talent tree background when opening the talent menu - _Originally written by **TripTrap | 76561198378851871**_
Game Version: 1.1.938
*===== System Specs =====
CPU Brand: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
Vendor: GenuineIntel
GPU Brand: NVIDIA GeForce GTX 1070 Ti
GPU Driver Info: Unknown
Num CPU Cores: 6
===================*
Context: **N/A**
*Expected Results:* The talent tree background shows the correct deity when the Talent Menu is opened
*Actual Results:* When the talent menu is opened, the background will not update until a tick happens - unless you spend a talent point ofc.
*Replication:* Go on any creature, spend a talent point. Close the Talent Tree menu and reopen it again. Note that you will have the default background until a tick happens - unless you spend a talent point.
*Evidence:* https://drive.google.com/file/d/1QMMCb0a3mZVPUYXFuxbiA5it_jBgGaoR/view
[Original Report from Official] | non_main | default talent tree background when opening the talent menu originally written by triptrap game version system specs cpu brand intel r core tm cpu vendor genuineintel gpu brand nvidia geforce gtx ti gpu driver info unknown num cpu cores context n a expected results talent tree background being the correct deity when opening the talent menu is opened actual results when the talent menu is opened the background will only update until a tick happens unless you spend a talent point ofc replication go on any creature spend a telent point close the talent tree menu and reopen it again note that you will have the default background until a tick happens unless you spend a talent point evidene | 0 |
5,224 | 26,496,284,385 | IssuesEvent | 2023-01-18 06:07:49 | Kalkwst/MicroLib | https://api.github.com/repos/Kalkwst/MicroLib | closed | 🩹 Change error throwing to use the new built in methods. | maintainance in progress | We should change the exception throwing to the new built in methods provided by Microsoft.
https://learn.microsoft.com/en-us/dotnet/api/system.argumentexception.throwifnullorempty?view=net-7.0
https://learn.microsoft.com/en-us/dotnet/api/system.argumentnullexception.throwifnull?view=net-7.0#system-argumentnullexception-throwifnull(system-object-system-string)
`ArgumentException.ThrowIfNullOrEmpty` was introduced in .NET 7, and `ArgumentNullException.ThrowIfNull` in .NET 6.
Example:
from
```cs
protected AbstractCollectionDecorator(ICollection<T> collection)
{
_collection = collection ?? throw new ArgumentNullException(nameof(collection));
}
```
to
```cs
protected AbstractCollectionDecorator(ICollection<T> collection)
{
ArgumentNullException.ThrowIfNull(collection);
_collection = collection;
}
``` | True | 🩹 Change error throwing to use the new built in methods. - We should change the exception throwing to the new built in methods provided by Microsoft.
https://learn.microsoft.com/en-us/dotnet/api/system.argumentexception.throwifnullorempty?view=net-7.0
https://learn.microsoft.com/en-us/dotnet/api/system.argumentnullexception.throwifnull?view=net-7.0#system-argumentnullexception-throwifnull(system-object-system-string)
`ArgumentException.ThrowIfNullOrEmpty` was introduced in .NET 7, and `ArgumentNullException.ThrowIfNull` in .NET 6.
Example:
from
```cs
protected AbstractCollectionDecorator(ICollection<T> collection)
{
_collection = collection ?? throw new ArgumentNullException(nameof(collection));
}
```
to
```cs
protected AbstractCollectionDecorator(ICollection<T> collection)
{
ArgumentNullException.ThrowIfNull(collection);
_collection = collection;
}
``` | main | 🩹 change error throwing to use the new built in methods we should change the exception throwing to the new built in methods provided by microsoft the argumentexception throwifnullorempty was introduced in net argumentnullexception throwifnull on net example from cs protected abstractcollectiondecorator icollection collection collection collection throw new argumentnullexception nameof collection to cs protected abstractcollectiondecorator icollection collection argumentnullexception throwifnull collection collection collection | 1 |
273,223 | 20,777,301,599 | IssuesEvent | 2022-03-16 11:42:56 | gardener/docforge | https://api.github.com/repos/gardener/docforge | closed | Addressing versions in document cross-repository/component links | component/documentation kind/discussion lifecycle/rotten | Relative links in source documents are resolved to absolute and that will set their version to the same as the linking document.
Absolute links to other nodes in the same repository (that should be relative) that feature a specific version may do that on purpose (to pin to exact state) or as a result of bad practices.
Absolute links to another repository normally use its master branch. As development progresses, it may turn out that an older version of a linking document points to a different, updated state of the master version of the linked document. This does not consistently reflect the state of the product for a particular version.
To resolve this, the older version of the document should be updated to link to a valid state of the linked document to keep the common information space consistent for that version. Or even better that should happen upon release to have all links to master versions changed with the respective component versions.
Another approach would be to manage the created bundles versioned in a repo too but that will not enable reproducible builds for a particular version of the whole product.
Managing this manually can be overkill and should be aided by automation if that's what needs to happen. We need to further discuss the following options:
1. If the absolute link version doesn't match a node, then keep the original link
1. If the absolute link points to a node, update the absolute link with the version of the document representing the node (e.g. master -> 1.8.0).
1. Do not mandate an approach but make it configurable (either of the above) e.g. using semver format to instruct behavior per node or globally | 1.0 | Addressing versions in document cross-repository/component links - Relative links in source documents are resolved to absolute and that will set their version to the same as the linking document.
Absolute links to other nodes in the same repository (that should be relative) that feature a specific version may do that on purpose (to pin to exact state) or as a result of bad practices.
Absolute links to another repository normally use its master branch. As development progresses, it may turn out that an older version of a linking document points to a different, updated state of the master version of the linked document. This does not consistently reflect the state of the product for a particular version.
To resolve this, the older version of the document should be updated to link to a valid state of the linked document to keep the common information space consistent for that version. Or even better that should happen upon release to have all links to master versions changed with the respective component versions.
Another approach would be to manage the created bundles versioned in a repo too but that will not enable reproducible builds for a particular version of the whole product.
Managing this manually can be overkill and should be aided by automation if that's what needs to happen. We need to further discuss the following options:
1. If the absolute link version doesn't match a node, then keep the original link
1. If the absolute link points to a node, update the absolute link with the version of the document representing the node (e.g. master -> 1.8.0).
1. Do not mandate an approach but make it configurable (either of the above) e.g. using semver format to instruct behavior per node or globally | non_main | addressing versions in document cross repository component links relative links in source documents are resolved to absolute and that will set their version to the same as the linking document absolute links to other nodes in the same repository that should be relative that feature a specific version may do that on purpose to pin to exact state or as a result of bad practices absolute links to another repository are normally using its master as development progresses it may turn out that an older version of a linking document is pointing to a different updated state of the master version of the linked document this does not reflect consistently the state of the product for a particular version to resolve this the older version of the document should be updated to link to a valid state of the linked document to keep the common information space consistent for that version or even better that should happen upon release to have all links to master versions changed with the respective component versions another approach would be to manage the created bundles versioned in a repo too but that will not enable reproducible builds for a particular version of the whole product managing this manually can be an overkill and should be aided by automation if that s what s necessary to happen we need to further discuss the following options if the absolute link version doesn t match a node then keep the original link if the absolute link points a node update absolute link with the version of the document representing the node e g master do not mandate an approach but make it configurable either of the above e g using semver format to instruct behavior per node or globally | 0 |
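Option 2 above — rewriting absolute links so a `master` ref becomes a concrete component version — boils down to a small URL transformation. A rough sketch (the helper name and URL layout are illustrative assumptions, not docforge's actual code):

```python
import re

def pin_github_link(url, version):
    # Rewrite the ref segment of a GitHub blob/tree/raw URL from
    # "master" to a concrete version, e.g. master -> 1.8.0.
    # Links whose ref is not "master" are returned unchanged.
    return re.sub(
        r"(github\.com/[^/]+/[^/]+/(?:blob|tree|raw)/)master(/|$)",
        r"\g<1>" + version + r"\g<2>",
        url,
    )
```

Applied at bundle-build time, such a pass would pin every cross-repository link to the released component version instead of the moving master.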
94,884 | 10,860,587,006 | IssuesEvent | 2019-11-14 09:23:12 | PaffaLon/LabbFighterArena | https://api.github.com/repos/PaffaLon/LabbFighterArena | opened | Application Menu Documentation | documentation | The application will contain a few menus to allow the user to navigate in the application. The user can navigate through the menus by pressing the arrow keys on the keyboard and the enter key to progress from one part of the program to another, from a higher plane to a lower plane.
The application contains the following menus.
**Mainmenu**
The _MainMenu_ (splash screen menu) is the application's primary menu that first appears when the application launches.
_This menu contains the following_
- options: Play
- options: Scoreboard
- options: Combatlog
- options: Exit
**HeroMenu**
The _HeroMenu_ is a 2nd layer menu reached from the play button in the main menu.
_This menu contains the following:_
- options: New Hero
- options: Load Hero
- options: Exit
**CombatLog**
The combatlog menu displays the combat history of the most recently played game.
_This menu contains the following:_
- options: Exit
**NewHeroMenu**
Here the user can customize the name of the new hero and randomly generate the remaining attributes. The user also has the option to either exit back to the previous menu or to press play and begin the game.
_This menu contains the following:_
- options: Play
- options: Exit
- options: Edit
 | 1.0 | Application Menu Documentation - The application will contain a few menus to allow the user to navigate in the application. The user can navigate through the menus by pressing the arrow keys on the keyboard and the enter key to progress from one part of the program to another, from a higher plane to a lower plane.
The application contains the following menus.
**Mainmenu**
The _MainMenu_ (splash screen menu) is the application's primary menu that first appears when the application launches.
_This menu contains the following_
- options: Play
- options: Scoreboard
- options: Combatlog
- options: Exit
**HeroMenu**
The _HeroMenu_ is a 2nd layer menu reached from the play button in the main menu.
_This menu contains the following:_
- options: New Hero
- options: Load Hero
- options: Exit
**CombatLog**
The combatlog menu displays the combat history of the most recently played game.
_This menu contains the following:_
- options: Exit
**NewHeroMenu**
Here the user can customize the name of the new hero and randomly generate the remaining attributes. The user also has the option to either exit back to the previous menu or to press play and begin the game.
_This menu contains the following:_
- options: Play
- options: Exit
- options: Edit
| non_main | application menu documentation the application will contain a few menus to allow the user to navigate in the application the can navigate thoughe the menus by pressing the arrows keys on the keyabord and the enter key to prgress frome on part in the prgram to another frome a higher plane to a lower plane the application contains the followig menus mainmenu the mainmenu splash screen menu is the applications primarey menu that first apears when the application launches this menu contains the following options play options scoreboard options combatlog options exit heromenu the hermenu is a layer menu derived from the play button from the main menu the heromenu this menu contains the following options new hero options load hero options exit combatlog the combatlog menu displays the combat history of the most resent played game this menu contains the following options exit newheromenu here the user can cosumize the the name of the new hero and randomly generat the remaning attribuets the user also has to the option to either exit back to the pervious menu or to press play and begin the game this menu contains the following options play options exit options edit | 0 |
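The menu hierarchy described in the report can be modelled as a small nested structure; the sketch below is illustrative only (the names come from the report, not from the project's code):

```python
# Each menu lists the options the user can move between with the arrow keys.
MENUS = {
    "MainMenu": ["Play", "Scoreboard", "Combatlog", "Exit"],
    "HeroMenu": ["New Hero", "Load Hero", "Exit"],
    "CombatLog": ["Exit"],
    "NewHeroMenu": ["Play", "Exit", "Edit"],
}

# Pressing enter on an option moves from a higher plane to a lower one.
TRANSITIONS = {
    ("MainMenu", "Play"): "HeroMenu",
    ("MainMenu", "Combatlog"): "CombatLog",
    ("HeroMenu", "New Hero"): "NewHeroMenu",
}

def select(menu, option):
    # Return the menu reached by choosing `option`; options without a
    # transition (plain actions) leave the user in the current menu.
    if option not in MENUS[menu]:
        raise ValueError(f"{option!r} is not an option of {menu}")
    return TRANSITIONS.get((menu, option), menu)
```

Choosing "Play" from the MainMenu, for example, drops down to the HeroMenu plane.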
1,998 | 6,715,450,768 | IssuesEvent | 2017-10-13 21:12:51 | dzavalishin/phantomuserland | https://api.github.com/repos/dzavalishin/phantomuserland | opened | optimize summon class | Component-Ph Component-VM enhancement Maintainability Performance | summon class instr must reference const pool slot
const pool slot will be inited with string ptr and is replaced by class ptr on first access
| True | optimize summon class - summon class instr must reference const pool slot
const pool slot will be inited with string ptr and is replaced by class ptr on first access
| main | optimize summon class summon class instr must reference const pool slot const pool slot will be inited with string ptr and is replaced by class ptr on first access | 1 |
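The proposal above is a classic lazy-resolution pattern: a const-pool slot starts out holding a class-name string and is replaced by the resolved class pointer on first access. A language-agnostic sketch of the idea (names are illustrative, not Phantom VM code):

```python
class ConstPoolSlot:
    """Holds a class name until first access, then caches the resolved class."""

    def __init__(self, class_name, resolver):
        self._value = class_name   # starts as a string "ptr"
        self._resolver = resolver  # e.g. a class-loader lookup
        self._resolved = False

    def get(self):
        if not self._resolved:
            # Replace the string with the class object exactly once.
            self._value = self._resolver(self._value)
            self._resolved = True
        return self._value
```

After the first `get()`, every later access returns the cached class without re-running the lookup, which is the whole point of moving the summon target into the const pool.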
533 | 3,931,810,469 | IssuesEvent | 2016-04-25 13:51:55 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Holiday: False trigger | Bug Maintainer Input Requested Triggering | This IA shouldn't be triggered on the following query:
[when is california primary 2016](https://duckduckgo.com/?q=when+is+california+primary+2016&ia=answer)
As [reported on Twitter](https://twitter.com/xarph/status/717064457227112448).
------
IA Page: http://duck.co/ia/view/holiday
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sekhavati | True | Holiday: False trigger - This IA shouldn't be triggered on the following query:
[when is california primary 2016](https://duckduckgo.com/?q=when+is+california+primary+2016&ia=answer)
As [reported on Twitter](https://twitter.com/xarph/status/717064457227112448).
------
IA Page: http://duck.co/ia/view/holiday
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sekhavati | main | holiday false trigger this ia shouldn t be triggered on the following query as ia page sekhavati | 1 |
41,625 | 6,924,196,065 | IssuesEvent | 2017-11-30 11:48:43 | mozilla/addons-server | https://api.github.com/repos/mozilla/addons-server | closed | Homepage property in user docs has an error | component: documentation qa: not needed triaged | The `homepage` prop at http://addons-server.readthedocs.io/en/latest/topics/api/accounts.html#get--api-v3-accounts-account-(int-user_id|string-username)- isn't quite formatted correctly. Fix incoming 😄 | 1.0 | Homepage property in user docs has an error - The `homepage` prop at http://addons-server.readthedocs.io/en/latest/topics/api/accounts.html#get--api-v3-accounts-account-(int-user_id|string-username)- isn't quite formatted correctly. Fix incoming 😄 | non_main | homepage property in user docs has an error the homepage prop at isn t quite formatted correctly fix incoming 😄 | 0 |
65,664 | 7,892,885,931 | IssuesEvent | 2018-06-28 16:14:05 | pydata/sparse | https://api.github.com/repos/pydata/sparse | opened | Move to a more robust unit testing model | design decision | I was thinking of moving the tests to a more robust unit test model. It'll take:
- The `dtype`, `shape` of each input array.
- The function name to test (both sparse and NumPy/SciPy) or callable.
- Where to place the arguments.
- Any additional arguments to the function.
It will:
- Generate the random arrays.
- Perform the operation on both arrays.
- Compare/assert.
This way, tests just reduce to "stub classes" instead of repetitions.
cc: @mrocklin | 1.0 | Move to a more robust unit testing model - I was thinking of moving the tests to a more robust unit test model. It'll take:
- The `dtype`, `shape` of each input array.
- The function name to test (both sparse and NumPy/SciPy) or callable.
- Where to place the arguments.
- Any additional arguments to the function.
It will:
- Generate the random arrays.
- Perform the operation on both arrays.
- Compare/assert.
This way, tests just reduce to "stub classes" instead of repetitions.
cc: @mrocklin | non_main | move to a more robust unit testing model i was thinking of moving the tests to a more robust unit test model it ll take the dtype shape of each input array the function name to test both sparse and numpy scipy or callable where to place the arguments any additional arguments to the function it will generate the random arrays perform the operation on both arrays compare assert this way tests just reduce to stub classes instead of repetitions cc mrocklin | 0 |
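The "stub class" model described above reduces to one generic checker parametrised by an input generator and the pair of functions to compare. A minimal pure-Python illustration (the real project would use NumPy/SciPy arrays and an `assert_allclose`-style comparison instead of `==`):

```python
import random

def check_equivalent(make_input, func_under_test, reference_func, trials=10):
    """Generate random inputs, run both implementations, compare results."""
    for _ in range(trials):
        data = make_input()
        assert func_under_test(data) == reference_func(data), data

# A "stub" test then just supplies the parameters:
def make_ints():
    return [random.randint(-100, 100) for _ in range(20)]

def naive_sum(xs):  # stand-in for the sparse implementation under test
    total = 0
    for x in xs:
        total += x
    return total

# check_equivalent(make_ints, naive_sum, sum)
```

Each test case then carries only the declarative bits (dtype, shape, function name, argument placement) and inherits the generate/run/compare machinery.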
5,647 | 28,371,533,653 | IssuesEvent | 2023-04-12 17:21:26 | WiredForWar/machines | https://api.github.com/repos/WiredForWar/machines | opened | Remove the CD-ROM API | Good first issue Maintainance | Get rid of CD-related `MachGui` API:
- `machinesCDIsAvailable()`
- `getCDRomDriveContainingMachinesCD()`
- `getCDRomDriveContainingFile()`.
Remove `doesAtLeastOnePlayerHaveMachinesCD()` and make `receivedHasMachinesCDMessage()` an empty function. | True | Remove the CD-ROM API - Get rid of CD-related `MachGui` API:
- `machinesCDIsAvailable()`
- `getCDRomDriveContainingMachinesCD()`
- `getCDRomDriveContainingFile()`.
Remove `doesAtLeastOnePlayerHaveMachinesCD()` and make `receivedHasMachinesCDMessage()` an empty function. | main | remove the cd rom api get rid of cd related machgui api machinescdisavailable getcdromdrivecontainingmachinescd getcdromdrivecontainingfile remove doesatleastoneplayerhavemachinescd and make receivedhasmachinescdmessage an empty function | 1 |
1,324 | 5,672,332,779 | IssuesEvent | 2017-04-12 00:57:39 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Holiday: Give answers for Mother's Day, Father's Day, etc. | Maintainer Input Requested Status: PR Received Suggestion Triggering | Following a [suggestion on Twitter](https://twitter.com/daytonlowell/status/725342856852824066), it would be helpful if this triggered on things such as `when is mothers day`. This changes depending on country, however, so would probably need to incorporate locale detection as well.
---
IA Page: http://duck.co/ia/view/holiday
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sekhavati
| True | Holiday: Give answers for Mother's Day, Father's Day, etc. - Following a [suggestion on Twitter](https://twitter.com/daytonlowell/status/725342856852824066), it would be helpful if this triggered on things such as `when is mothers day`. This changes depending on country, however, so would probably need to incorporate locale detection as well.
---
IA Page: http://duck.co/ia/view/holiday
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sekhavati
| main | holiday give answers for mother s day father s day etc following a it would be helpful if this triggered on things such as when is mothers day this changes depending on country however so would probably need to incorporate locale detection as well ia page sekhavati | 1 |
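Computing a locale-dependent holiday such as US Mother's Day (the second Sunday of May) is a small date calculation; the sketch below covers the US rule only — other locales use different rules, which is exactly why locale detection would be needed:

```python
from datetime import date, timedelta

def us_mothers_day(year):
    """Second Sunday of May in the given year (US rule)."""
    first_of_may = date(year, 5, 1)
    # Days from May 1 to the first Sunday (weekday(): Monday=0 ... Sunday=6).
    offset = (6 - first_of_may.weekday()) % 7
    return first_of_may + timedelta(days=offset + 7)
```

For 2016 this yields May 8, matching the date the IA would have needed to answer `when is mothers day` for a US user.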
3,981 | 18,344,603,645 | IssuesEvent | 2021-10-08 03:25:12 | pmqueiroz/mask-wizard | https://api.github.com/repos/pmqueiroz/mask-wizard | opened | Add linting and formatter | enhancement Maintainers Only | ### Preliminary checks
- [X] I've checked that there aren't [**other open issues**](https://github.com/pmqueiroz/mask-wizard/issues?q=is%3Aissue) on the same topic.
- [X] I want to work on this.
### Describe the problem requiring a solution
Create a code style guide and set up linting and a formatter for the project
### Describe the possible solution
Eslint and prettier
Add code style guide to GitHub Wiki
### Additional info
_No response_ | True | Add linting and formatter - ### Preliminary checks
- [X] I've checked that there aren't [**other open issues**](https://github.com/pmqueiroz/mask-wizard/issues?q=is%3Aissue) on the same topic.
- [X] I want to work on this.
### Describe the problem requiring a solution
Create a code style guide and set up linting and a formatter for the project
### Describe the possible solution
Eslint and prettier
Add code style guide to GitHub Wiki
### Additional info
_No response_ | main | add linting and formatter preliminary checks i ve checked that there aren t on the same topic i want to work on this describe the problem requiring a solution create a code style guide and add set up linting and formatter to the project describe the possible solution eslint and prettier add code style guid to github wiki additional info no response | 1 |
2,469 | 8,639,904,361 | IssuesEvent | 2018-11-23 22:33:44 | F5OEO/rpitx | https://api.github.com/repos/F5OEO/rpitx | closed | rpitx library | V1 related (not maintained) enhancement | Make a lib-rpitx for external software build.
Need to write exposed functions.
Requirements and proposals are welcome.
| True | rpitx library - Make a lib-rpitx for external software build.
Need to write exposed functions.
Requirements and proposals are welcome.
| main | rpitx library make a lib rpitx for external software build need to write exposed functions requirement and proposals welcomed | 1 |
136,876 | 5,289,653,970 | IssuesEvent | 2017-02-08 17:53:34 | anishathalye/gavel | https://api.github.com/repos/anishathalye/gavel | opened | Make the system more general for use cases beyond hackathons | enhancement low priority | Gavel has been [used](https://github.com/anishathalye/gavel/wiki/Users) mostly at hackathons. It would be neat to figure out other situations where such a system would be useful and then make Gavel more general so it can be used in more situations. | 1.0 | Make the system more general for use cases beyond hackathons - Gavel has been [used](https://github.com/anishathalye/gavel/wiki/Users) mostly at hackathons. It would be neat to figure out other situations where such a system would be useful and then make Gavel more general so it can be used in more situations. | non_main | make the system more general for use cases beyond hackathons gavel has been mostly at hackathons it would be neat to figure out other situations where such a system would be useful and then make gavel more general so it can be used in more situations | 0 |
4,293 | 21,657,145,836 | IssuesEvent | 2022-05-06 15:08:59 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Unable to set number_format to null for Money type | type: bug work: backend status: ready restricted: maintainers | ## Reproduce
1. Set up a Money column.
1. Submit a `PATCH` request to the columns API, e.g. `/api/db/v0/tables/15/columns/52/`
1. Send:
```json
{
"type": "MATHESAR_TYPES.MATHESAR_MONEY",
"display_options": {
"currency_symbol": "$",
"currency_symbol_location": "after-minus",
"number_format": "english"
}
}
```
Receive success. Good.
1. Now change `display_options.number_format` to `null`, sending:
```json
{
"type": "MATHESAR_TYPES.MATHESAR_MONEY",
"display_options": {
"currency_symbol": "$",
"currency_symbol_location": "after-minus",
"number_format": null
}
}
```
Expect success.
Receive:
```json
[
{
"code": 2024,
"field": "number_format",
"message": "This field may not be null.",
"detail": {}
}
]
```
| True | Unable to set number_format to null for Money type - ## Reproduce
1. Set up a Money column.
1. Submit a `PATCH` request to the columns API, e.g. `/api/db/v0/tables/15/columns/52/`
1. Send:
```json
{
"type": "MATHESAR_TYPES.MATHESAR_MONEY",
"display_options": {
"currency_symbol": "$",
"currency_symbol_location": "after-minus",
"number_format": "english"
}
}
```
Receive success. Good.
1. Now change `display_options.number_format` to `null`, sending:
```json
{
"type": "MATHESAR_TYPES.MATHESAR_MONEY",
"display_options": {
"currency_symbol": "$",
"currency_symbol_location": "after-minus",
"number_format": null
}
}
```
Expect success.
Receive:
```json
[
{
"code": 2024,
"field": "number_format",
"message": "This field may not be null.",
"detail": {}
}
]
```
| main | unable to set number format to null for money type reproduce set up a money column submit a patch request to the columns api e g api db tables columns send json type mathesar types mathesar money display options currency symbol currency symbol location after minus number format english receive success good now change display options number format to null sending json type mathesar types mathesar money display options currency symbol currency symbol location after minus number format null expect success receive json code field number format message this field may not be null detail | 1 |
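The failing request can be reproduced by building the same payload programmatically. A sketch of the payload construction only (sending it with an HTTP client is omitted; the endpoint and field names are the ones from the report):

```python
import json

def money_display_patch(number_format=None):
    """Build the PATCH body for a Money column; number_format may be None."""
    return {
        "type": "MATHESAR_TYPES.MATHESAR_MONEY",
        "display_options": {
            "currency_symbol": "$",
            "currency_symbol_location": "after-minus",
            "number_format": number_format,  # serialises as JSON null
        },
    }

# PATCHing /api/db/v0/tables/15/columns/52/ with this body should succeed;
# the reported bug is that the API answers "This field may not be null."
body = json.dumps(money_display_patch())
```

A serializer that treats `number_format` as nullable would accept this body, which is the expected behaviour described above.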
685 | 4,231,990,991 | IssuesEvent | 2016-07-04 19:16:33 | Microsoft/DirectXMesh | https://api.github.com/repos/Microsoft/DirectXMesh | opened | Retire Windows 8.1 Store and Windows phone 8.1 projects | maintainence | At some point we should remove support for these older versions in favor of UWP apps
``DirectXMesh_Windows81.vcxproj``
``DirectXMesh_WindowsPhone81.vcxproj``
Please put any requests for continued support for one or more of these here. | True | Retire Windows 8.1 Store and Windows phone 8.1 projects - At some point we should remove support for these older versions in favor of UWP apps
``DirectXMesh_Windows81.vcxproj``
``DirectXMesh_WindowsPhone81.vcxproj``
Please put any requests for continued support for one or more of these here. | main | retire windows store and windows phone projects at some point we should remove support for these older versions in favor of uwp apps directxmesh vcxproj directxmesh vcxproj please put any requests for continued support for one or more of these here | 1 |
1,682 | 6,574,154,006 | IssuesEvent | 2017-09-11 11:43:54 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | EC2_ASG: Support NewInstancesProtectedFromScaleIn parameter | affects_2.3 aws cloud feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_asg
##### ANSIBLE VERSION
```
ansible 2.3.0
config file =
configured module search path = Default w/o overrides
```
##### SUMMARY
see http://boto3.readthedocs.io/en/latest/reference/services/autoscaling.html#AutoScaling.Client.create_auto_scaling_group
parameter NewInstancesProtectedFromScaleIn is currently unsupported
##### STEPS TO REPRODUCE
```
- ec2_asg:
name: myasg
launch_config_name: my_new_lc
health_check_period: 60
health_check_type: ELB
min_size: 5
max_size: 5
desired_capacity: 5
region: us-east-1
new_instances_protected_from_scale_in: true | false
```
##### EXPECTED RESULTS
param to be taken into account | True | EC2_ASG: Support NewInstancesProtectedFromScaleIn parameter - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ec2_asg
##### ANSIBLE VERSION
```
ansible 2.3.0
config file =
configured module search path = Default w/o overrides
```
##### SUMMARY
see http://boto3.readthedocs.io/en/latest/reference/services/autoscaling.html#AutoScaling.Client.create_auto_scaling_group
parameter NewInstancesProtectedFromScaleIn is currently unsupported
##### STEPS TO REPRODUCE
```
- ec2_asg:
name: myasg
launch_config_name: my_new_lc
health_check_period: 60
health_check_type: ELB
min_size: 5
max_size: 5
desired_capacity: 5
region: us-east-1
new_instances_protected_from_scale_in: true | false
```
##### EXPECTED RESULTS
param to be taken into account | main | asg support newinstancesprotectedfromscalein parameter issue type feature idea component name asg ansible version ansible config file configured module search path default w o overrides summary see parameter newinstancesprotectedfromscalein is currently unsupported steps to reproduce asg name myasg launch config name my new lc health check period health check type elb min size max size desired capacity region us east new instances protected from scale in true false expected results param to be taken into account | 1 |
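Supporting the parameter mostly means mapping the module's snake_case option onto boto3's CamelCase key, and only when the user actually set it. A hypothetical sketch of that mapping (the helper name is invented; this is not the actual ec2_asg code):

```python
def build_asg_kwargs(module_params):
    """Translate selected module options into create_auto_scaling_group kwargs."""
    kwargs = {
        "AutoScalingGroupName": module_params["name"],
        "MinSize": module_params["min_size"],
        "MaxSize": module_params["max_size"],
        "DesiredCapacity": module_params["desired_capacity"],
    }
    protect = module_params.get("new_instances_protected_from_scale_in")
    if protect is not None:  # only pass the key when the user set it
        kwargs["NewInstancesProtectedFromScaleIn"] = protect
    return kwargs
```

Leaving the key out when the option is unset preserves the current default behaviour of the AWS API.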
94,436 | 15,962,374,141 | IssuesEvent | 2021-04-16 01:10:41 | KaterinaOrg/my-bag-of-holding | https://api.github.com/repos/KaterinaOrg/my-bag-of-holding | opened | CVE-2021-21353 (High) detected in pug-2.0.3.tgz, pug-code-gen-2.0.1.tgz | security vulnerability | ## CVE-2021-21353 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>pug-2.0.3.tgz</b>, <b>pug-code-gen-2.0.1.tgz</b></p></summary>
<p>
<details><summary><b>pug-2.0.3.tgz</b></p></summary>
<p>A clean, whitespace-sensitive template language for writing HTML</p>
<p>Library home page: <a href="https://registry.npmjs.org/pug/-/pug-2.0.3.tgz">https://registry.npmjs.org/pug/-/pug-2.0.3.tgz</a></p>
<p>
Dependency Hierarchy:
- grunt-contrib-pug-2.0.0.tgz (Root Library)
- :x: **pug-2.0.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>pug-code-gen-2.0.1.tgz</b></p></summary>
<p>Default code-generator for pug. It generates HTML via a JavaScript template function.</p>
<p>Library home page: <a href="https://registry.npmjs.org/pug-code-gen/-/pug-code-gen-2.0.1.tgz">https://registry.npmjs.org/pug-code-gen/-/pug-code-gen-2.0.1.tgz</a></p>
<p>
Dependency Hierarchy:
- grunt-contrib-pug-2.0.0.tgz (Root Library)
- pug-2.0.3.tgz
- :x: **pug-code-gen-2.0.1.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Pug is an npm package which is a high-performance template engine. In pug before version 3.0.1, if a remote attacker was able to control the `pretty` option of the pug compiler, e.g. if you spread a user provided object such as the query parameters of a request into the pug template inputs, it was possible for them to achieve remote code execution on the node.js backend. This is fixed in version 3.0.1. This advisory applies to multiple pug packages including "pug", "pug-code-gen". pug-code-gen has a backported fix at version 2.0.3. This advisory is not exploitable if there is no way for un-trusted input to be passed to pug as the `pretty` option, e.g. if you compile templates in advance before applying user input to them, you do not need to upgrade.
<p>Publish Date: 2021-03-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21353>CVE-2021-21353</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-p493-635q-r6gr">https://github.com/advisories/GHSA-p493-635q-r6gr</a></p>
<p>Release Date: 2020-12-23</p>
<p>Fix Resolution: pug -3.0.1, pug-code-gen-2.0.3, pug-code-gen-3.0.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"pug","packageVersion":"2.0.3","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"grunt-contrib-pug:2.0.0;pug:2.0.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"pug -3.0.1, pug-code-gen-2.0.3, pug-code-gen-3.0.2"},{"packageType":"javascript/Node.js","packageName":"pug-code-gen","packageVersion":"2.0.1","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"grunt-contrib-pug:2.0.0;pug:2.0.3;pug-code-gen:2.0.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"pug -3.0.1, pug-code-gen-2.0.3, pug-code-gen-3.0.2"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-21353","vulnerabilityDetails":"Pug is an npm package which is a high-performance template engine. In pug before version 3.0.1, if a remote attacker was able to control the `pretty` option of the pug compiler, e.g. if you spread a user provided object such as the query parameters of a request into the pug template inputs, it was possible for them to achieve remote code execution on the node.js backend. This is fixed in version 3.0.1. This advisory applies to multiple pug packages including \"pug\", \"pug-code-gen\". pug-code-gen has a backported fix at version 2.0.3. This advisory is not exploitable if there is no way for un-trusted input to be passed to pug as the `pretty` option, e.g. if you compile templates in advance before applying user input to them, you do not need to upgrade.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21353","cvss3Severity":"high","cvss3Score":"9.0","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-21353 (High) detected in pug-2.0.3.tgz, pug-code-gen-2.0.1.tgz - ## CVE-2021-21353 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>pug-2.0.3.tgz</b>, <b>pug-code-gen-2.0.1.tgz</b></p></summary>
<p>
<details><summary><b>pug-2.0.3.tgz</b></p></summary>
<p>A clean, whitespace-sensitive template language for writing HTML</p>
<p>Library home page: <a href="https://registry.npmjs.org/pug/-/pug-2.0.3.tgz">https://registry.npmjs.org/pug/-/pug-2.0.3.tgz</a></p>
<p>
Dependency Hierarchy:
- grunt-contrib-pug-2.0.0.tgz (Root Library)
- :x: **pug-2.0.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>pug-code-gen-2.0.1.tgz</b></p></summary>
<p>Default code-generator for pug. It generates HTML via a JavaScript template function.</p>
<p>Library home page: <a href="https://registry.npmjs.org/pug-code-gen/-/pug-code-gen-2.0.1.tgz">https://registry.npmjs.org/pug-code-gen/-/pug-code-gen-2.0.1.tgz</a></p>
<p>
Dependency Hierarchy:
- grunt-contrib-pug-2.0.0.tgz (Root Library)
- pug-2.0.3.tgz
- :x: **pug-code-gen-2.0.1.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Pug is an npm package which is a high-performance template engine. In pug before version 3.0.1, if a remote attacker was able to control the `pretty` option of the pug compiler, e.g. if you spread a user provided object such as the query parameters of a request into the pug template inputs, it was possible for them to achieve remote code execution on the node.js backend. This is fixed in version 3.0.1. This advisory applies to multiple pug packages including "pug", "pug-code-gen". pug-code-gen has a backported fix at version 2.0.3. This advisory is not exploitable if there is no way for un-trusted input to be passed to pug as the `pretty` option, e.g. if you compile templates in advance before applying user input to them, you do not need to upgrade.
<p>Publish Date: 2021-03-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21353>CVE-2021-21353</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-p493-635q-r6gr">https://github.com/advisories/GHSA-p493-635q-r6gr</a></p>
<p>Release Date: 2020-12-23</p>
<p>Fix Resolution: pug -3.0.1, pug-code-gen-2.0.3, pug-code-gen-3.0.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"pug","packageVersion":"2.0.3","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"grunt-contrib-pug:2.0.0;pug:2.0.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"pug -3.0.1, pug-code-gen-2.0.3, pug-code-gen-3.0.2"},{"packageType":"javascript/Node.js","packageName":"pug-code-gen","packageVersion":"2.0.1","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"grunt-contrib-pug:2.0.0;pug:2.0.3;pug-code-gen:2.0.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"pug -3.0.1, pug-code-gen-2.0.3, pug-code-gen-3.0.2"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-21353","vulnerabilityDetails":"Pug is an npm package which is a high-performance template engine. In pug before version 3.0.1, if a remote attacker was able to control the `pretty` option of the pug compiler, e.g. if you spread a user provided object such as the query parameters of a request into the pug template inputs, it was possible for them to achieve remote code execution on the node.js backend. This is fixed in version 3.0.1. This advisory applies to multiple pug packages including \"pug\", \"pug-code-gen\". pug-code-gen has a backported fix at version 2.0.3. This advisory is not exploitable if there is no way for un-trusted input to be passed to pug as the `pretty` option, e.g. 
if you compile templates in advance before applying user input to them, you do not need to upgrade.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21353","cvss3Severity":"high","cvss3Score":"9.0","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_main | cve high detected in pug tgz pug code gen tgz cve high severity vulnerability vulnerable libraries pug tgz pug code gen tgz pug tgz a clean whitespace sensitive template language for writing html library home page a href dependency hierarchy grunt contrib pug tgz root library x pug tgz vulnerable library pug code gen tgz default code generator for pug it generates html via a javascript template function library home page a href dependency hierarchy grunt contrib pug tgz root library pug tgz x pug code gen tgz vulnerable library found in base branch main vulnerability details pug is an npm package which is a high performance template engine in pug before version if a remote attacker was able to control the pretty option of the pug compiler e g if you spread a user provided object such as the query parameters of a request into the pug template inputs it was possible for them to achieve remote code execution on the node js backend this is fixed in version this advisory applies to multiple pug packages including pug pug code gen pug code gen has a backported fix at version this advisory is not exploitable if there is no way for un trusted input to be passed to pug as the pretty option e g if you compile templates in advance before applying user input to them you do not need to upgrade publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on 
scores click a href suggested fix type upgrade version origin a href release date fix resolution pug pug code gen pug code gen isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt contrib pug pug isminimumfixversionavailable true minimumfixversion pug pug code gen pug code gen packagetype javascript node js packagename pug code gen packageversion packagefilepaths istransitivedependency true dependencytree grunt contrib pug pug pug code gen isminimumfixversionavailable true minimumfixversion pug pug code gen pug code gen basebranches vulnerabilityidentifier cve vulnerabilitydetails pug is an npm package which is a high performance template engine in pug before version if a remote attacker was able to control the pretty option of the pug compiler e g if you spread a user provided object such as the query parameters of a request into the pug template inputs it was possible for them to achieve remote code execution on the node js backend this is fixed in version this advisory applies to multiple pug packages including pug pug code gen pug code gen has a backported fix at version this advisory is not exploitable if there is no way for un trusted input to be passed to pug as the pretty option e g if you compile templates in advance before applying user input to them you do not need to upgrade vulnerabilityurl | 0 |
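The advisory in the record above hinges on untrusted input (e.g. spread query parameters) reaching the template compiler's `pretty` option. The defensive idea — allow-listing template options instead of spreading user input — is language-agnostic; here is a minimal Python sketch of it. The option names and `SAFE_OPTIONS` set are illustrative assumptions, not pug's actual API surface.

```python
# Sketch of the allow-list idea behind the pug advisory above: never pass
# untrusted input (e.g. request query parameters) straight into
# template-engine options. Option names here are hypothetical.

SAFE_OPTIONS = {"cache", "filename"}  # assumed allow-list, not pug's real one

def sanitize_template_options(untrusted: dict) -> dict:
    """Keep only explicitly allowed options, silently dropping e.g. 'pretty'."""
    return {k: v for k, v in untrusted.items() if k in SAFE_OPTIONS}

query_params = {"pretty": "');process.exit(1);//", "filename": "index"}
print(sanitize_template_options(query_params))  # prints {'filename': 'index'}
```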
324,636 | 27,812,369,183 | IssuesEvent | 2023-03-18 09:17:46 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: schemachange/during/kv failed | C-test-failure O-robot O-roachtest branch-master release-blocker | roachtest.schemachange/during/kv [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9121674?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9121674?buildTab=artifacts#/schemachange/during/kv) on master @ [6c99966f604f3521acdb925b9f689529ffd46df3](https://github.com/cockroachdb/cockroach/commits/6c99966f604f3521acdb925b9f689529ffd46df3):
```
test artifacts and logs in: /artifacts/schemachange/during/kv/run_1
(schemachange.go:49).1: pq: the backup is from a version older than our minimum restoreable version 22.2
(monitor.go:127).Wait: monitor failure: monitor task failed: t.Fatal() was called
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-schema
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*schemachange/during/kv.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: schemachange/during/kv failed - roachtest.schemachange/during/kv [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9121674?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9121674?buildTab=artifacts#/schemachange/during/kv) on master @ [6c99966f604f3521acdb925b9f689529ffd46df3](https://github.com/cockroachdb/cockroach/commits/6c99966f604f3521acdb925b9f689529ffd46df3):
```
test artifacts and logs in: /artifacts/schemachange/during/kv/run_1
(schemachange.go:49).1: pq: the backup is from a version older than our minimum restoreable version 22.2
(monitor.go:127).Wait: monitor failure: monitor task failed: t.Fatal() was called
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-schema
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*schemachange/during/kv.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_main | roachtest schemachange during kv failed roachtest schemachange during kv with on master test artifacts and logs in artifacts schemachange during kv run schemachange go pq the backup is from a version older than our minimum restoreable version monitor go wait monitor failure monitor task failed t fatal was called parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest ssd help see see cc cockroachdb sql schema | 0 |
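The failure in the record above ("the backup is from a version older than our minimum restoreable version 22.2") is a minimum-version gate: a backup below the minimum restorable version is rejected outright. A hypothetical sketch of such a check follows; the real CockroachDB logic lives in its backup/restore code, and the tuple representation here is an assumption for illustration.

```python
# Hypothetical minimum-restorable-version gate, mirroring the error message
# above. Versions are modeled as (major, minor) tuples for simplicity.

def can_restore(backup_version: tuple, minimum: tuple = (22, 2)) -> bool:
    """Reject backups taken on a version older than the minimum."""
    return backup_version >= minimum
```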
1,596 | 6,572,379,717 | IssuesEvent | 2017-09-11 01:51:50 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | datadog_monitor is not idempotent | affects_2.1 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
datadog_monitor
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
```
##### SUMMARY
datadog_monitor makes changes every time. This is especially annoying as datadog has a way to notify folks when a monitor changes
##### STEPS TO REPRODUCE
Make any call to datadog_monitor twice
##### EXPECTED RESULTS
OK
##### ACTUAL RESULTS
CHANGED
##### NOTES
I've started playing with code to fix this, though I'm having issues with how `silenced` is passed around as a NoneType, whereas the API returns an empty dict.
Rough code for now:
```
monitor = _get_monitor(module)
if not monitor:
    _post_monitor(module, options)
else:
    # If the query, name or message differ update the monitor
    if module.params['name'] != monitor['name'] or module.params['query'] != monitor['query'] or module.params['message'].rstrip() != monitor['message']:
        _update_monitor(module, monitor, options)
    # If any options differ update the monitor
    for option in options:
        if monitor['options'][option] != options[option]:
            _update_monitor(module, monitor, options)
    # No changes to an attributes
    module.exit_json(changed=False)
```
| True | datadog_monitor is not idempotent - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
datadog_monitor
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
```
##### SUMMARY
datadog_monitor makes changes every time. This is especially annoying as datadog has a way to notify folks when a monitor changes
##### STEPS TO REPRODUCE
Make any call to datadog_monitor twice
##### EXPECTED RESULTS
OK
##### ACTUAL RESULTS
CHANGED
##### NOTES
I've started playing with code to fix this, though I'm having issues with how `silenced` is passed around as a NoneType, whereas the API returns an empty dict.
Rough code for now:
```
monitor = _get_monitor(module)
if not monitor:
    _post_monitor(module, options)
else:
    # If the query, name or message differ update the monitor
    if module.params['name'] != monitor['name'] or module.params['query'] != monitor['query'] or module.params['message'].rstrip() != monitor['message']:
        _update_monitor(module, monitor, options)
    # If any options differ update the monitor
    for option in options:
        if monitor['options'][option] != options[option]:
            _update_monitor(module, monitor, options)
    # No changes to an attributes
    module.exit_json(changed=False)
```
| main | datadog monitor is not idempotent issue type bug report component name datadog monitor ansible version ansible summary datadog monitor makes changes every time this is especially annoying as datadog has a way to notify folks when a monitor changes steps to reproduce make any call to datadog monitor twice expected results ok actual results changed notes i ve started playing with code to fix this though having issues with how silenced is passed around as a nonetype where as the api returns an empty dict rough code for now monitor get monitor module if not monitor post monitor module options else if the query name or message differ update the monitor if module params monitor or module params monitor or module params rstrip monitor update monitor module monitor options if any options differ update the monitor for option in options if monitor options update monitor module monitor options no changes to an attributes module exit json changed false | 1 |
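The reporter's blocker in the record above is that the Datadog API returns an empty dict for `silenced` while the module passes `None`, so a naive equality check always reports a change. A sketch of an idempotence comparison that normalizes the two before comparing follows; it is an illustration of the idea, not the actual module code, and the helper names are invented.

```python
# Sketch (not the ansible module's code) of an idempotent option comparison
# that treats None and {} as equivalent, addressing the NoneType-vs-empty-dict
# mismatch described in the report above.

def _normalize(value):
    """Datadog's API returns {} where the module passes None; unify them."""
    return {} if value is None else value

def options_changed(current: dict, desired: dict) -> bool:
    """True only if some desired option genuinely differs from the API state."""
    return any(_normalize(current.get(key)) != _normalize(value)
               for key, value in desired.items())
```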
4,620 | 23,924,274,786 | IssuesEvent | 2022-09-09 20:22:44 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | reopened | Bug: Wrong handler mapping after esbuild | blocked/more-info-needed maintainer/need-followup | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
When running esbuild on a Lambda function, the output doesn't maintain the same folder structure as the entry points, and this remains unchanged in the built template, resulting in SAM not finding the Lambda handler:
### Steps to reproduce:
Project structure:
```
- package.json
- template.yaml
-- src
--- functions
---- FunctionName
----- app.js
--- utils
---- some-util.js
```
Template.yaml:
```
FunctionName:
  Type: AWS::Serverless::Function
  Metadata:
    BuildMethod: esbuild
  Properties:
    Handler: src/functions/FunctionName/app.Handler
    Events:
      HttpEvent:
        ....
```
### Observed result:
code is bundled to the FunctionName folder in .aws-build, resulting in:
```
- .aws-build
-- build
--- template.yaml
--- FunctionName
---- app.js
```
However, the built template results in:
```
FunctionName:
  Type: AWS::Serverless::Function
  Metadata:
    BuildMethod: esbuild
  Properties:
    CodeUri: FunctionName
    Handler: src/functions/FunctionName/app.Handler
    Events:
      HttpEvent:
        ....
```
This results in an error, since the resolved path FunctionName/src/functions/FunctionName/app.Handler does not exist.
### Expected result:
The handler in the built template should match the esbuild output, i.e.:
```
FunctionName:
  Type: AWS::Serverless::Function
  Metadata:
    BuildMethod: esbuild
  Properties:
    CodeUri: FunctionName
    Handler: app.Handler
    Events:
      HttpEvent:
        ....
```
| True | Bug: Wrong handler mapping after esbuild - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
When running esbuild on a Lambda function, the output doesn't maintain the same folder structure as the entry points, and this remains unchanged in the built template, resulting in SAM not finding the Lambda handler:
### Steps to reproduce:
Project structure:
```
- package.json
- template.yaml
-- src
--- functions
---- FunctionName
----- app.js
--- utils
---- some-util.js
```
Template.yaml:
```
FunctionName:
  Type: AWS::Serverless::Function
  Metadata:
    BuildMethod: esbuild
  Properties:
    Handler: src/functions/FunctionName/app.Handler
    Events:
      HttpEvent:
        ....
```
### Observed result:
code is bundled to the FunctionName folder in .aws-build, resulting in:
```
- .aws-build
-- build
--- template.yaml
--- FunctionName
---- app.js
```
However, the built template results in:
```
FunctionName:
  Type: AWS::Serverless::Function
  Metadata:
    BuildMethod: esbuild
  Properties:
    CodeUri: FunctionName
    Handler: src/functions/FunctionName/app.Handler
    Events:
      HttpEvent:
        ....
```
This results in an error, since the resolved path FunctionName/src/functions/FunctionName/app.Handler does not exist.
### Expected result:
The handler in the built template should match the esbuild output, i.e.:
```
FunctionName:
  Type: AWS::Serverless::Function
  Metadata:
    BuildMethod: esbuild
  Properties:
    CodeUri: FunctionName
    Handler: app.Handler
    Events:
      HttpEvent:
        ....
```
| main | bug wrong handler mapping after esbuild make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description when running esbuild on a lambda function the output doesn t mantain the same folder structure as the enry points and this remains unchange in the built template resulting in sam not finding the lambda handler steps to reproduce project structure package json template yaml src functions functionname app js utils some util js template yaml functionname type aws serverless function metadata buildmethod esbuild properties handler src functions functionname app handler events httpevent observed result code is budled to functionname folder in aws build resulting in aws build build template yaml functionname app js however the built template results in functionname type aws serverless function metadata buildmethod esbuild properties codeuri functionname handler src functions functionname app handler events httpevent resulting in an error since functionname src functions functionname app handler expected result handler in built template should match esbuild output i e functionname type aws serverless function metadata buildmethod esbuild properties codeuri functionname handler app handler events httpevent | 1 |
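The fix the record above argues for can be sketched as a handler rewrite: once esbuild flattens `src/functions/FunctionName/app.js` into the function's artifact directory, only the basename-derived handler remains valid. The function below mirrors that expected behaviour for illustration; it is not SAM CLI's implementation.

```python
# Sketch of the handler rewrite the report above expects in the built
# template: strip the source directory prefix once esbuild has flattened
# the entry point into the function's CodeUri directory.
import posixpath

def rewrite_handler(handler: str) -> str:
    """'src/functions/FunctionName/app.Handler' -> 'app.Handler'"""
    return posixpath.basename(handler)
```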
503,313 | 14,589,146,795 | IssuesEvent | 2020-12-19 00:41:18 | Jwinter-Jones/explore-california | https://api.github.com/repos/Jwinter-Jones/explore-california | opened | Add Links to Monthly Specials | Priority 2 Severity 1 feature request | ###_ISSUE TEMPLATE: Use for bugs, tasks, and feature requests. Label accordingly, using the labels on the right.
###_Issue Type - is this a bug, task, or feature request?: Feature Request
###_Severity (1-4; 1 - not severe, 4 - very severe): 1
###_Priority (1-4; 1 - lowest priority, 4 - highest priority): 2
###_Synopsis - describe bug/task/feature request, in more detail: Add links to monthly specials
###_Expected Behavior - what the behavior is supposed to be, for bug/task/FR: When a monthly special name is clicked, you will see a description and place to purchase the special
###_Steps to reproduce/view issue: Open the website and you won't see the monthly specials described or purchasable
###_Other Notes: I would like to just click the name of the special and be brought to a webpage so I can read about and purchase the special
| 1.0 | Add Links to Monthly Specials - ###_ISSUE TEMPLATE: Use for bugs, tasks, and feature requests. Label accordingly, using the labels on the right.
###_Issue Type - is this a bug, task, or feature request?: Feature Request
###_Severity (1-4; 1 - not severe, 4 - very severe): 1
###_Priority (1-4; 1 - lowest priority, 4 - highest priority): 2
###_Synopsis - describe bug/task/feature request, in more detail: Add links to monthly specials
###_Expected Behavior - what the behavior is supposed to be, for bug/task/FR: When a monthly special name is clicked, you will see a description and place to purchase the special
###_Steps to reproduce/view issue: Open the website and you won't see the monthly specials described or purchasable
###_Other Notes: I would like to just click the name of the special and be brought to a webpage so I can read about and purchase the special
| non_main | add links to monthly specials issue template use for bugs tasks and feature requests label accordingly using the labels on the right issue type is this a bug task or feature request feature request severity not severe very severe priority lowest priority highest priority synopsis describe bug task feature request in more detail add links to monthly specials expected behavior what the behavior is supposed to be for bug task fr when a monthly special name is clicked you will see a description and place to purchase the special steps to reproduce view issue open the website and you won t see the monthly specials described or purchasable other notes i would like to just click the name of the special and be brought to a webpage so i can read about and purchase the special | 0 |
588,081 | 17,646,673,097 | IssuesEvent | 2021-08-20 07:19:57 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | Unable to join any multiplayer lobby on Android | priority:0 area:multiplayer platform:android type:reliability | **Describe the bug:**
Attempting to join any multiplayer lobby on Android throws an error. On the host's side, they will momentarily see the player joining but then suddenly leaves the lobby.
**osu!lazer version:**
2021.815.0-lazer
**Logs:**
[network.log](https://github.com/ppy/osu/files/6988290/network.log)
[runtime.log](https://github.com/ppy/osu/files/6988291/runtime.log)
| 1.0 | Unable to join any multiplayer lobby on Android - **Describe the bug:**
Attempting to join any multiplayer lobby on Android throws an error. On the host's side, they will momentarily see the player joining but then suddenly leaves the lobby.
**osu!lazer version:**
2021.815.0-lazer
**Logs:**
[network.log](https://github.com/ppy/osu/files/6988290/network.log)
[runtime.log](https://github.com/ppy/osu/files/6988291/runtime.log)
| non_main | unable to join any multiplayer lobby on android describe the bug attempting to join any multiplayer lobby on android throws an error on the host s side they will momentarily see the player joining but then suddenly leaves the lobby osu lazer version lazer logs | 0 |
525 | 3,925,269,808 | IssuesEvent | 2016-04-22 18:18:52 | StefMa/TimeTracking | https://api.github.com/repos/StefMa/TimeTracking | closed | Create README for this project | MAINTAINING | Explain what it is. What they can do and where they can get help. | True | Create README for this project - Explain what it is. What they can do and where they can get help. | main | create readme for this project explain what it is what they can do and where they can get help | 1 |
116,440 | 24,918,341,270 | IssuesEvent | 2022-10-30 17:15:02 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Folding LoadAlignedVector* into the consumer instructions with VEX-encoding | enhancement area-CodeGen-coreclr JitUntriaged | Currently, `LoadVector128/256` can be folded into its consumer instructions with VEX-encoding but `LoadAlignedVector128/256` not.
`LoadAlignedVector128/256` would throw hardware exceptions if the memory address is not aligned to the specific boundary, but other VEX-encoded instructions (e.g., `vaddps xmm0, xmm1, [unalignedAddr]`) can work with unaligned memory. So, actually, we can fold `LoadAlignedVector128/256` into its consumer instructions with VEX-encoding.
```asm
;;; unoptimized
vmovaps xmm0, [unalignedAddr] ;;; hardware exception
vaddps xmm0, xmm1, xmm0
;;; optimized
vaddps xmm0, xmm1, [unalignedAddr] ;;; ok
```
All the mainstream C/C++ compilers have this behavior.
@CarolEidt @tannergooding @mikedn
category:cq
theme:vector-codegen
skill-level:intermediate
cost:medium | 1.0 | Folding LoadAlignedVector* into the consumer instructions with VEX-encoding - Currently, `LoadVector128/256` can be folded into its consumer instructions with VEX-encoding but `LoadAlignedVector128/256` not.
`LoadAlignedVector128/256` would throw hardware exceptions if the memory address is not aligned to the specific boundary, but other VEX-encoded instructions (e.g., `vaddps xmm0, xmm1, [unalignedAddr]`) can work with unaligned memory. So, actually, we can fold `LoadAlignedVector128/256` into its consumer instructions with VEX-encoding.
```asm
;;; unoptimized
vmovaps xmm0, [unalignedAddr] ;;; hardware exception
vaddps xmm0, xmm1, xmm0
;;; optimized
vaddps xmm0, xmm1, [unalignedAddr] ;;; ok
```
All the mainstream C/C++ compilers have this behavior.
@CarolEidt @tannergooding @mikedn
category:cq
theme:vector-codegen
skill-level:intermediate
cost:medium | non_main | folding loadalignedvector into the consumer instructions with vex encoding currently can be folded into its consumer instructions with vex encoding but not would throw hardware exceptions if the memory address is not aligned to the specific boundary but other vex encoded instructions e g vaddps can work with unaligned memory so actually we can fold into its consumer instructions with vex encoding asm unoptimized vmovaps hardware exception vaddps optimized vaddps ok all the mainstream c c compilers have this behavior caroleidt tannergooding mikedn category cq theme vector codegen skill level intermediate cost medium | 0 |
1,209 | 5,165,219,952 | IssuesEvent | 2017-01-17 13:04:56 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2 module does not create root volume as documented | affects_2.1 aws bug_report cloud waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2 module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
```
##### OS / ENVIRONMENT
ubuntu 14.04
##### SUMMARY
Launching an EBS backed ec2 instance with a non-default root volume does not work as shown in the documentation example. Instead it launches an instance with a standard 8GB root volume, and an additional volume per the definition.
The following is the task which was used to launch the instance (some keys removed for brevity):
```
- name: Launch ec2 instance
  ec2:
    instance_type: "t2.small"
    volumes:
      - device_name: /dev/xvda
        volume_type: gp2
        volume_size: 15
  register: ec2
```
This is the example from http://docs.ansible.com/ansible/ec2_module.html, which shows a non-standard root volume:
```
# Single instance with ssd gp2 root volume
- ec2:
    key_name: mykey
    group: webserver
    instance_type: c3.medium
    image: ami-123456
    wait: yes
    wait_timeout: 500
    volumes:
      - device_name: /dev/xvda
        volume_type: gp2
        volume_size: 8
    vpc_subnet_id: subnet-29e63245
    assign_public_ip: yes
    exact_count: 1
```
| True | ec2 module does not create root volume as documented - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2 module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
```
##### OS / ENVIRONMENT
ubuntu 14.04
##### SUMMARY
Launching an EBS backed ec2 instance with a non-default root volume does not work as shown in the documentation example. Instead it launches an instance with a standard 8GB root volume, and an additional volume per the definition.
The following is the task which was used to launch the instance (some keys removed for brevity):
```
- name: Launch ec2 instance
  ec2:
    instance_type: "t2.small"
    volumes:
      - device_name: /dev/xvda
        volume_type: gp2
        volume_size: 15
  register: ec2
```
This is the example from http://docs.ansible.com/ansible/ec2_module.html, which shows a non-standard root volume:
```
# Single instance with ssd gp2 root volume
- ec2:
    key_name: mykey
    group: webserver
    instance_type: c3.medium
    image: ami-123456
    wait: yes
    wait_timeout: 500
    volumes:
      - device_name: /dev/xvda
        volume_type: gp2
        volume_size: 8
    vpc_subnet_id: subnet-29e63245
    assign_public_ip: yes
    exact_count: 1
```
| main | module does not create root volume as documented issue type bug report component name module ansible version ansible os environment ubuntu summary launching an ebs backed instance with a non default root volume does not work as shown in the documentation example instead it launches an instance with a standard root volume and an additional volume per the definition the following is the task which was used to launch the instance some keys removed for brevity name launch instance instance type small volumes device name dev xvda volume type volume size register this is the example from which shows a non standard root volume single instance with ssd root volume key name mykey group webserver instance type medium image ami wait yes wait timeout volumes device name dev xvda volume type volume size vpc subnet id subnet assign public ip yes exact count | 1 |
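A plausible mechanism for the behaviour in the record above (stated here as an assumption, not confirmed by the report): an EC2 volumes entry only replaces the root volume when its `device_name` matches the AMI's root device name; otherwise it is attached as an extra volume alongside the default 8GB root. A sketch of that matching rule, with invented helper names:

```python
# Assumed EC2 block-device-mapping rule behind the report above: a volumes
# entry resizes the root volume only if its device_name equals the AMI's
# root device name; anything else becomes an additional attached volume.

def overrides_root(volumes: list, root_device_name: str) -> bool:
    """True if some volume spec targets the AMI's root device."""
    return any(v.get("device_name") == root_device_name for v in volumes)
```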
74,134 | 9,754,985,225 | IssuesEvent | 2019-06-04 12:58:44 | pyinstaller/pyinstaller | https://api.github.com/repos/pyinstaller/pyinstaller | closed | Clarify documentation on __file__ | documentation pull-request wanted | The section "Run-time Information" of the documentation makes it sound like `__file__` does not work at all when running a frozen Python program:
> For example, you might have data files that are normally found based on a module’s `__file__` attribute. That will not work when the code is bundled.
That doesn't seem to be true though - PyInstaller synthesizes a `__file__` for frozen apps that should "just work" in most cases (see https://github.com/pyinstaller/pyinstaller/issues/1598#issuecomment-147753507). For the babel Python package for example it worked (in conjunction with the hook) quite well and an attempt to introduce PyInstaller support actually broke it (https://github.com/python-babel/babel/issues/529).
I think the main thing to point out here is that if one copies data files to the root of the bundled data directory (`datas=[('/path/to/mypackage/data/', 'data')]`) it won't work - one has to use `datas=[('/path/to/mypackage/data/', 'mypackage/data')]`. | 1.0 | non_main | 0
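The destination-path pitfall can be sketched concretely. The helper below is illustrative (not PyInstaller API); it shows why a `__file__`-relative lookup only survives freezing when the bundled destination mirrors the package layout:

```python
import os

def package_data_path(module_file, relative):
    """Resolve a data file relative to a module's __file__.

    PyInstaller synthesizes __file__ for bundled modules, so a lookup
    like this keeps working when frozen -- but only if the datas= entry
    preserved the package layout (destination 'mypackage/data', not
    just 'data'). Pass the module's own __file__ as module_file.
    """
    base = os.path.dirname(os.path.abspath(module_file))
    return os.path.join(base, relative)
```

With `datas=[('/path/to/mypackage/data/', 'data')]` the files land outside the package directory, so the join above points at a path that does not exist in the bundle.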
1,136 | 4,998,872,107 | IssuesEvent | 2016-12-09 21:18:45 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_elb_lb: Ansible didn't detect "scheme" change | affects_2.1 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2_elb_lb
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
config file = /Users/rabe/Documents/Projects/IoT/Design/Devel/Env-Setup/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Changed the scheme from `internal` to `internet-facing`, and Ansible didn't detect that it had to change the state, i.e., perform some work.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Originally had the following:
```
- name: Create Elastic Load Balancer for frontend
ec2_elb_lb:
name: "{{ owner }}-elb-{{ env }}-fe"
region: "{{ region }}"
subnets:
- "{{ mgmt_subnet_fe_1 }}"
- "{{ mgmt_subnet_fe_2 }}"
state: present
scheme: internal
cross_az_load_balancing: yes
listeners:
- protocol: http
instance_protocol: http
load_balancer_port: 80
instance_port: 80
health_check:
ping_protocol: http # options are http, https, ssl, tcp
ping_port: 80
ping_path: "/index.html" # not required for tcp or ssl
response_timeout: 5 # seconds
interval: 30 # seconds
unhealthy_threshold: 2
healthy_threshold: 10
instance_ids:
- "{{ mgmt_i_fe_1.tagged_instances[0].id }}"
- "{{ mgmt_i_fe_2.tagged_instances[0].id }}"
tags:
Name: "{{ owner }}_elb_{{ env }}_fe"
Env: "{{ owner }}_{{ env }}"
Tier: "{{ owner }}_{{ env }}_frontend"
```
Changed the above "scheme" line as follows:
```
scheme: internet-facing
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
I expected that Ansible noticed it had to change the ELB config.
##### ACTUAL RESULTS
Ansible did _not_ notice it got work to do. I had to manually remove the ELB and re-run the playbook.
| True | main | 1
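For background, the scheme of a classic ELB cannot be modified in place, so an idempotent module has to detect the drift and recreate the load balancer. A sketch of that check (the dict shape mirrors what DescribeLoadBalancers returns; the function name is illustrative, not the module's code):

```python
def scheme_needs_recreate(existing_elb, desired_scheme):
    """Return True when the desired scheme differs from the deployed one.

    Because a classic ELB's scheme is fixed at creation time, detecting
    this drift means the module must delete and recreate the ELB rather
    than update it -- which is exactly the step the report says never ran.
    """
    current = existing_elb.get("Scheme", "internet-facing")
    return current != desired_scheme
```

A caller would describe the ELB, run this check, and only then branch into the delete/recreate path.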
3,417 | 13,182,090,493 | IssuesEvent | 2020-08-12 15:15:07 | duo-labs/cloudmapper | https://api.github.com/repos/duo-labs/cloudmapper | closed | Feature: Per-VPC/Subnet Network Connections (Arrows) | map unmaintained_functionality | Currently network connections are displayed per-host: when a host (e.g. an EC2 instance) can communicate with another host (e.g. through a permissive SG rule), then an arrow goes from one host to the other.
The issue with this is that for accounts with a lot of hosts, we end up with hundreds of connections/arrows, often due to *one* SG rule (i.e. if you have 2 VPCs with 1 subnet each and each of these subnets has hundreds of EC2 instances, if a SG rule allows one subnet to access the other then each pair of EC2 instances will have an arrow).
Would it be possible instead that the arrow be between the VPC/subnet "square" instead of each host? This would:
- Better represent the network configuration
- Ensure that the graphs are readable. | True | main | 1
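The proposed collapse is essentially an edge-quotienting step: map every host-level arrow onto its subnet pair and deduplicate. A sketch (names are illustrative, not CloudMapper internals):

```python
def collapse_to_subnet_edges(host_edges, subnet_of):
    """Collapse host->host reachability edges into subnet->subnet edges.

    host_edges: iterable of (src_host, dst_host) pairs.
    subnet_of:  mapping from host id to its subnet id.
    Hundreds of instance-level arrows produced by a single SG rule
    collapse into one arrow between the two subnets.
    """
    return {(subnet_of[src], subnet_of[dst]) for src, dst in host_edges}
```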
417,115 | 12,155,912,626 | IssuesEvent | 2020-04-25 15:09:49 | Scifabric/pybossa | https://api.github.com/repos/Scifabric/pybossa | closed | Error in rebuilding the database | priority.medium | Rebuilding the database produces an error as it cannot drop tables which have some foreign key relation in some other table.
Instead of a plain DROP TABLE, DROP TABLE ... CASCADE should be used.
`python cli.py db_rebuild` produces this error. | 1.0 | non_main | 0
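A sketch of the suggested change as a statement builder (PostgreSQL syntax; the table name is illustrative, not pybossa's schema):

```python
def drop_table_stmt(table, cascade=True):
    """Build a PostgreSQL DROP TABLE statement.

    CASCADE also drops dependent objects such as foreign-key
    constraints, which is what a plain DROP TABLE trips over
    during a rebuild.
    """
    suffix = " CASCADE" if cascade else ""
    return 'DROP TABLE IF EXISTS "{0}"{1};'.format(table, suffix)
```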
1,641 | 6,572,667,260 | IssuesEvent | 2017-09-11 04:14:18 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | npm "mongodb" package error | affects_2.2 bug_report waiting_on_maintainer | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
npm
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
Installing "mongodb" package with npm module failed with module error
```
ValueError: need more than 1 value to unpack
```
##### STEPS TO REPRODUCE
```
ansible localhost -m npm -a "name=mongodb global=yes executable=/usr/bin/npm state=latest"
```
##### ACTUAL RESULTS
```
Loading callback plugin minimal of type stdout, v2.0 from /usr/local/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/extras/packaging/language/npm.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479199216.92-116938239050990 `" && echo ansible-tmp-1479199216.92-116938239050990="` echo $HOME/.ansible/tmp/ansible-tmp-1479199216.92-116938239050990 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpcvkNFw TO /root/.ansible/tmp/ansible-tmp-1479199216.92-116938239050990/npm.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1479199216.92-116938239050990/ /root/.ansible/tmp/ansible-tmp-1479199216.92-116938239050990/npm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1479199216.92-116938239050990/npm.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1479199216.92-116938239050990/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_BPzD5_/ansible_module_npm.py", line 271, in <module>
main()
File "/tmp/ansible_BPzD5_/ansible_module_npm.py", line 254, in main
outdated = npm.list_outdated()
File "/tmp/ansible_BPzD5_/ansible_module_npm.py", line 205, in list_outdated
pkg, other = re.split('\s|@', dep, 1)
ValueError: need more than 1 value to unpack
localhost | FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "npm"
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_BPzD5_/ansible_module_npm.py\", line 271, in <module>\n main()\n File \"/tmp/ansible_BPzD5_/ansible_module_npm.py\", line 254, in main\n outdated = npm.list_outdated()\n File \"/tmp/ansible_BPzD5_/ansible_module_npm.py\", line 205, in list_outdated\n pkg, other = re.split('\\s|@', dep, 1)\nValueError: need more than 1 value to unpack\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
```
| True | main | 1
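The traceback pinpoints the bug: `re.split('\s|@', dep, 1)` yields a single element when a line of `npm outdated` output contains neither whitespace nor `@`, and unpacking it into two names raises `ValueError`. A guarded version (a sketch of a fix, not the merged patch):

```python
import re

def split_outdated_line(dep):
    """Split one line of `npm outdated` output into (package, rest).

    Lines without a separator yield a single element from re.split,
    so guard on the length instead of unpacking into two names.
    """
    parts = re.split(r"\s|@", dep, 1)
    if len(parts) < 2:
        return parts[0], None
    return parts[0], parts[1]
```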
276,113 | 8,584,376,912 | IssuesEvent | 2018-11-13 22:34:26 | kristinbranson/APT | https://api.github.com/repos/kristinbranson/APT | opened | gui nicety: tracking montage, descending err | lowpriority | the tracking montage, descending err figures that open when GT performance completes need to include mov, frm, and tgt to be useful in finding the frame to review. and yellow text is almost invisible on white background assay. maybe put box behind text so it always has contrast or change to red that would show up on both black and white backgrounds.
i *thought* you could navigate to the frame by clicking on the plot. maybe i just imagined this or it's broken now. | 1.0 | non_main | 0
64,400 | 18,545,456,090 | IssuesEvent | 2021-10-21 21:27:06 | SeleniumHQ/selenium | https://api.github.com/repos/SeleniumHQ/selenium | closed | [🐛 Bug]: Selenium 4 Dotnet ChromiumDriver extends WebDriver instead of RemoteWebDriver | I-defect needs-triaging | ### What happened?
Why does the Dotnet version of the Selenium 4 ChromiumDriver class extend WebDriver instead of RemoteWebDriver? The alpha versions of ChromiumDriver extended RemoteWebDriver, and the Java version of ChromiumDriver inherits from RemoteWebDriver. Why is the Dotnet version different?
### How can we reproduce the issue?
```shell
RemoteWebDriver driver = new ChromeDriver();
// This throws a type-cast error in Selenium 4; it worked in the alpha versions.
```
### Relevant log output
```shell
System.InvalidCastException : Unable to cast object of type 'OpenQA.Selenium.Chrome.ChromeDriver' to type 'OpenQA.Selenium.Remote.RemoteWebDriver'
```
### Operating System
Windows 10
### Selenium version
4.0.0
### What are the browser(s) and version(s) where you see this issue?
Chrome 93
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 93.0.4577.1500
### Are you using Selenium Grid?
_No response_ | 1.0 | non_main | 0
554,059 | 16,388,271,624 | IssuesEvent | 2021-05-17 13:18:38 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | support.mozilla.org - site is not usable | browser-firefox-ios os-ios priority-important | <!-- @browser: Firefox iOS 33.1 -->
<!-- @ua_header: Mozilla/5.0 (iPhone; CPU OS 14_4_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/33.1 Mobile/15E148 Safari/605.1.15 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/74053 -->
**URL**: https://support.mozilla.org/en-US/kb/whats-new-firefox-ios-version-33?as=u
**Browser / Version**: Firefox iOS 33.1
**Operating System**: iOS 14.4.2
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Problems with Captcha
**Steps to Reproduce**:
Too many captchas required to access.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | non_main | 0
105,469 | 23,056,704,285 | IssuesEvent | 2022-07-25 05:46:26 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | closed | Handle gzip compressed files without .gz extension | bug imported from old code repo priority: Medium import | _Original author: tfmorris (March 08, 2012 00:28:29)_
Decompress Google Earth kmz files.
More generally, perhaps we should be sniffing the file to see if it's compressed (similar to the Unix file command using /etc/magic) or attempting to decompress all files so that we don't have to depend on a list of file extensions.
_Original issue: http://code.google.com/p/google-refine/issues/detail?id=547_
| 1.0 | Handle gzip compressed files without .gz extension - _Original author: tfmorris (March 08, 2012 00:28:29)_
Decompress Google Earth kmz files.
More generally, perhaps we should be sniffing the file to see if it's compressed (similar to the Unix file command using /etc/magic) or attempting to decompress all files so that we don't have to depend on a list of file extensions.
_Original issue: http://code.google.com/p/google-refine/issues/detail?id=547_
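A magic-number sniff along the lines suggested (note that .kmz is actually a ZIP container, so its signature differs from gzip's `0x1f 0x8b`):

```python
import gzip
import io
import zipfile

def sniff_container(raw_bytes):
    """Classify a byte stream by magic number instead of file extension.

    This is the /etc/magic-style approach the report suggests: look at
    the first bytes rather than trusting a .gz or .kmz suffix.
    """
    if raw_bytes[:2] == b"\x1f\x8b":   # gzip magic number
        return "gzip"
    if raw_bytes[:2] == b"PK":         # ZIP container, e.g. a .kmz file
        return "zip"
    return "plain"
```

An importer could call this on the first few bytes of the upload and dispatch to `gzip.decompress` or `zipfile.ZipFile` accordingly.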
| non_main | handle gzip compressed files without gz extension original author tfmorris march decompress google earth kmz files more generally perhaps we should be sniffing the file to see if it s compressed similar to the unix file command using etc magic or attempting to decompress all files so that we don t have to depend on a list of file extensions original issue | 0 |
157,994 | 6,020,049,534 | IssuesEvent | 2017-06-07 15:39:54 | honestbleeps/Reddit-Enhancement-Suite | https://api.github.com/repos/honestbleeps/Reddit-Enhancement-Suite | closed | Subreddit Manager: display relative dates for "last visited" | Difficulty-1_Easy Priority-4_Some Interest RE-Enhancement RE-Request | ...and then show the full date we display now in hover text.
https://www.reddit.com/message/messages/8dcnqx (only works for me) | 1.0 | non_main | 0
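A minimal sketch of the requested relative-date rendering (illustrative, not RES's actual code); the exact timestamp the UI shows today would move into the hover text:

```python
from datetime import datetime, timedelta

def relative_date(then, now=None):
    """Render a timestamp as a coarse relative phrase like '3 days ago'."""
    now = now or datetime.utcnow()
    delta = now - then
    if delta < timedelta(minutes=1):
        return "just now"
    if delta < timedelta(hours=1):
        return "%d min ago" % (delta.seconds // 60)
    if delta < timedelta(days=1):
        return "%d h ago" % (delta.seconds // 3600)
    return "%d days ago" % delta.days
```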
281,683 | 21,315,427,127 | IssuesEvent | 2022-04-16 07:25:09 | Denniszedead/pe | https://api.github.com/repos/Denniszedead/pe | opened | Implementation for find command missing | type.DocumentationBug severity.Low | 
There is no implementation for the find command.
<!--session: 1650088129667-4c09a4b0-f002-4cde-8fe0-5f0a90ff3d26-->
<!--Version: Web v3.4.2--> | 1.0 | Implementation for find command missing - 
There is no implementation for the find command.
<!--session: 1650088129667-4c09a4b0-f002-4cde-8fe0-5f0a90ff3d26-->
<!--Version: Web v3.4.2--> | non_main | implementation for find command missing there is no implementation for the find command | 0 |
235,894 | 7,743,924,204 | IssuesEvent | 2018-05-29 14:07:53 | HXLStandard/libhxl-python | https://api.github.com/repos/HXLStandard/libhxl-python | reopened | URL rewriting failing for Google sheet | priority:high status:rejected type:bug | URL rewriting for Google Sheets is failing for this URL, from Elliot McBride:
https://docs.google.com/document/d/1AR2o47jkCn5I2IF-ybWFp_5YlBwVoR2VE6cs9mqnMZI/edit?usp=sharing
This redirects to
https://docs.google.com/spreadsheets/d/1D3s2Ct9Jl0CzrNZw1OIcRaH1UA6xd2_MBksKRkHJhD4/edit#gid=1913543725
(which the HXL Proxy is able to open). | 1.0 | URL rewriting failing for Google sheet - URL rewriting for Google Sheets is failing for this URL, from Elliot McBride:
https://docs.google.com/document/d/1AR2o47jkCn5I2IF-ybWFp_5YlBwVoR2VE6cs9mqnMZI/edit?usp=sharing
This redirects to
https://docs.google.com/spreadsheets/d/1D3s2Ct9Jl0CzrNZw1OIcRaH1UA6xd2_MBksKRkHJhD4/edit#gid=1913543725
(which the HXL Proxy is able to open). | non_main | url rewriting failing for google sheet url rewriting for google sheets is failing for this url from elliot mcbride this redirects to which the hxl proxy is able to open | 0 |
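A sketch of the kind of rewrite needed once the redirect has been resolved (the `/export?format=csv` endpoint pattern is Google's; the function itself is illustrative, not libhxl's implementation). The `/document/` URL from the report fails the spreadsheet pattern, which is why the redirect has to be followed first:

```python
import re

def rewrite_gsheet_url(url):
    """Rewrite a Google Sheets edit URL to its CSV export endpoint.

    Returns None for anything that isn't a /spreadsheets/d/<id> URL
    (e.g. a docs.google.com/document URL that still needs its redirect
    resolved before it can be rewritten).
    """
    m = re.search(r"/spreadsheets/d/([^/?#]+)", url)
    if not m:
        return None
    out = "https://docs.google.com/spreadsheets/d/%s/export?format=csv" % m.group(1)
    gid = re.search(r"gid=(\d+)", url)
    if gid:
        out += "&gid=" + gid.group(1)
    return out
```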
507 | 3,863,935,235 | IssuesEvent | 2016-04-08 11:46:48 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | closed | Processor should generate the SAME GamlAdditions for everyone | > Enhancement > Question Affects Maintainability | This is an open question, but I'm sure that generating the same GamlAdditions files for everyone should help a lot to all have the same errors. I just found an error with Damien, he was calling the operator "text_file", and the operator "url_file" was called instead (both of them were expecting a "txt" extension). I had not the error, because my processor generates the operator "url_file" before the "text_file" (in Damien's computer, it was the opposite order).
I'm sure that a lot of errors like this one currently exist, and if we all have different errors, it's much more difficult to track them down. | True | main | 1
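The "first match wins" behaviour behind the text_file/url_file mix-up, and the sorted emission that would make it deterministic, can be sketched like this (names are illustrative; the real generator is Java):

```python
def pick_loader(loaders, extension):
    """Return the first registered loader matching an extension.

    With two operators both claiming "txt", whichever the processor
    generated first wins -- so the generator must emit additions in a
    stable order (e.g. sorted by name) to behave the same everywhere.
    """
    for name, extensions in loaders:
        if extension in extensions:
            return name
    return None

# Stable ordering: sort by operator name before emitting additions,
# so every machine generates the same GamlAdditions.
loaders = sorted([("url_file", {"txt"}), ("text_file", {"txt"})])
```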
39,438 | 16,014,788,563 | IssuesEvent | 2021-04-20 14:49:22 | cityofaustin/atd-data-tech | https://api.github.com/repos/cityofaustin/atd-data-tech | closed | Data Migration | Transform IMPD data to fit moped_proj_personnel table in Moped | Product: Moped Project: Moped v1.0 Service: Dev | [ETL details](https://docs.google.com/spreadsheets/d/1S8BiXf_H09epwPFI9VxeSgmQ6Jp25U-tWDJUlpzq0PM/edit?usp=sharing)
- [x] Join moped_roles with IMPD data and then insert into moped_proj_personnel | 1.0 | Data Migration | Transform IMPD data to fit moped_proj_personnel table in Moped - [ETL details](https://docs.google.com/spreadsheets/d/1S8BiXf_H09epwPFI9VxeSgmQ6Jp25U-tWDJUlpzq0PM/edit?usp=sharing)
- [x] Join moped_roles with IMPD data and then insert into moped_proj_personnel | non_main | data migration transform impd data to fit moped proj personnel table in moped join moped roles with impd data and then insert into moped proj personnel | 0 |
1,564 | 6,572,257,756 | IssuesEvent | 2017-09-11 00:42:15 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Incorrect documentation: osx_defaults does not allow reading values | affects_2.1 docs_report waiting_on_maintainer | Copying from https://github.com/ansible/ansible/issues/16455
##### ISSUE TYPE
- Documentation Report
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file = /Users/kevin/wizardbox/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
$ cat ansible.cfg
[defaults]
roles_path = ./data/ansible/roles
```
##### OS / ENVIRONMENT
OS X, locally
##### SUMMARY
http://docs.ansible.com/ansible/osx_defaults_module.html says that the module can be used to read from defaults, but it can't. It can only write and update. There are no read examples, and `value` is required when `state=present`.
##### STEPS TO REPRODUCE
N/A
<!--- Paste example playbooks or commands between quotes below -->
```
- name: check if spelling correction is enabled
osx_defaults: key={{ item }}
with_items:
- NSAutomaticSpellingCorrectionEnabled
- WebAutomaticSpellingCorrectionEnabled
register: spelling_status
```
##### EXPECTED RESULTS
The defaults to be registered into a variable.
##### ACTUAL RESULTS
<!--- Paste verbatim command output between quotes below -->
```
TASK [check if spelling correction is enabled] *********************************
failed: [localhost] (item=NSAutomaticSpellingCorrectionEnabled) => {"failed": true, "item": "NSAutomaticSpellingCorrectionEnabled", "msg": "Missing value parameter"}
failed: [localhost] (item=WebAutomaticSpellingCorrectionEnabled) => {"failed": true, "item": "WebAutomaticSpellingCorrectionEnabled", "msg": "Missing value parameter"}
```
| True | Incorrect documentation: osx_defaults does not allow reading values - Copying from https://github.com/ansible/ansible/issues/16455
##### ISSUE TYPE
- Documentation Report
##### ANSIBLE VERSION
```
ansible 2.1.0.0
config file = /Users/kevin/wizardbox/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
$ cat ansible.cfg
[defaults]
roles_path = ./data/ansible/roles
```
##### OS / ENVIRONMENT
OS X, locally
##### SUMMARY
http://docs.ansible.com/ansible/osx_defaults_module.html says that the module can be used to read from defaults, but it can't. It can only write and update. There are no read examples, and `value` is required when `state=present`.
##### STEPS TO REPRODUCE
N/A
<!--- Paste example playbooks or commands between quotes below -->
```
- name: check if spelling correction is enabled
osx_defaults: key={{ item }}
with_items:
- NSAutomaticSpellingCorrectionEnabled
- WebAutomaticSpellingCorrectionEnabled
register: spelling_status
```
##### EXPECTED RESULTS
The defaults to be registered into a variable.
##### ACTUAL RESULTS
<!--- Paste verbatim command output between quotes below -->
```
TASK [check if spelling correction is enabled] *********************************
failed: [localhost] (item=NSAutomaticSpellingCorrectionEnabled) => {"failed": true, "item": "NSAutomaticSpellingCorrectionEnabled", "msg": "Missing value parameter"}
failed: [localhost] (item=WebAutomaticSpellingCorrectionEnabled) => {"failed": true, "item": "WebAutomaticSpellingCorrectionEnabled", "msg": "Missing value parameter"}
```
| main | incorrect documentation osx defaults does not allow reading values copying from issue type documentation report ansible version ansible config file users kevin wizardbox ansible cfg configured module search path default w o overrides configuration cat ansible cfg roles path data ansible roles os environment os x locally summary says that the module can be used to read from defaults but it can t it can only write and update there are no read examples and value is required when state present steps to reproduce n a name check if spelling correction is enabled osx defaults key item with items nsautomaticspellingcorrectionenabled webautomaticspellingcorrectionenabled register spelling status expected results the defaults to be registered into a variable actual results task failed item nsautomaticspellingcorrectionenabled failed true item nsautomaticspellingcorrectionenabled msg missing value parameter failed item webautomaticspellingcorrectionenabled failed true item webautomaticspellingcorrectionenabled msg missing value parameter | 1 |
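The osx_defaults record above boils down to: reading a default requires shelling out to `defaults read`, which the module did not support at the time. A hedged Python sketch of such a read helper, where the command runner is injected so the logic can be exercised off macOS (the function name and runner parameter are illustrative assumptions, not the module's actual API):

```python
import subprocess

def read_default(domain, key, runner=subprocess.run):
    """Return the value of `defaults read <domain> <key>`, or None if unset."""
    result = runner(
        ["defaults", "read", domain, key],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return None  # `defaults` exits non-zero when the key does not exist
    return result.stdout.strip()
```

Injecting `runner` also makes the missing-key path easy to verify: the real `defaults` tool signals "no such key" only via its exit status, which is exactly the case the playbook in the record tripped over.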
936 | 4,650,396,360 | IssuesEvent | 2016-10-03 03:55:58 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | system/puppet.py check if puppet is disabled does not work | affects_2.2 bug_report waiting_on_maintainer | type: bug report
version: git devel as of today
summary: in line 139 of a check tries to find out if puppet is disabled. it runs PUPPET_CMD + " config print agent_disabled_lockfile". If that failes with rc != 0, the module fails, too. In my setup this always fails. My current guess is, that not all puppet versions support that parameters and fail for that reason. I will try to investigate further. I removed that last elif rc != 0: to make it working in my setup, i dont see a nice general solution yet. Maybe I find another way to determine the disabled status. | True | system/puppet.py check if puppet is disabled does not work - type: bug report
version: git devel as of today
summary: in line 139 of a check tries to find out if puppet is disabled. it runs PUPPET_CMD + " config print agent_disabled_lockfile". If that failes with rc != 0, the module fails, too. In my setup this always fails. My current guess is, that not all puppet versions support that parameters and fail for that reason. I will try to investigate further. I removed that last elif rc != 0: to make it working in my setup, i dont see a nice general solution yet. Maybe I find another way to determine the disabled status. | main | system puppet py check if puppet is disabled does not work type bug report version git devel as of today summary in line of a check tries to find out if puppet is disabled it runs puppet cmd config print agent disabled lockfile if that failes with rc the module fails too in my setup this always fails my current guess is that not all puppet versions support that parameters and fail for that reason i will try to investigate further i removed that last elif rc to make it working in my setup i dont see a nice general solution yet maybe i find another way to determine the disabled status | 1 |
322 | 2,495,349,517 | IssuesEvent | 2015-01-06 10:09:07 | beautyjoy/llab | https://api.github.com/repos/beautyjoy/llab | closed | Remove Redundant '=true' from URLs | enhancement priority 2 - soon | Llab based URLs are pretty ugly....we could simplify things a lot, so that if a `novideo` tag is present in a URL, the `=true` part is implied since `false` is the default state.
@peterasujan Would this break anything significantly? I don't think so...right? | 1.0 | Remove Redundant '=true' from URLs - Llab based URLs are pretty ugly....we could simplify things a lot, so that if a `novideo` tag is present in a URL, the `=true` part is implied since `false` is the default state.
@peterasujan Would this break anything significantly? I don't think so...right? | non_main | remove redundant true from urls llab based urls are pretty ugly we could simplify things a lot so that if a novideo tag is present in a url the true part is implied since false is the default state peterasujan would this break anything significantly i don t think so right | 0 |
4,685 | 24,197,552,564 | IssuesEvent | 2022-09-24 04:46:04 | espeak-ng/espeak-ng | https://api.github.com/repos/espeak-ng/espeak-ng | closed | Create language specification files | feature maintainability | These files should use the BCP47 code for the language (`en-GB-scotland`, `pt-BR`, `da`, etc.) and should be separate from the voice files. | True | Create language specification files - These files should use the BCP47 code for the language (`en-GB-scotland`, `pt-BR`, `da`, etc.) and should be separate from the voice files. | main | create language specification files these files should use the code for the language en gb scotland pt br da etc and should be separate from the voice files | 1 |
5,709 | 30,180,320,221 | IssuesEvent | 2023-07-04 08:27:34 | ipfs/helia | https://api.github.com/repos/ipfs/helia | closed | Is there a way to use Infura's IPFS service using Helia? | need/maintainers-input | With `http-ipfs-client`, you could push data to Infura's IPFS servers. Is this still possible with Helia? If yes, how can this be done? | True | Is there a way to use Infura's IPFS service using Helia? - With `http-ipfs-client`, you could push data to Infura's IPFS servers. Is this still possible with Helia? If yes, how can this be done? | main | is there a way to use infura s ipfs service using helia with http ipfs client you could push data to infura s ipfs servers is this still possible with helia if yes how can this be done | 1 |
3,270 | 12,488,877,966 | IssuesEvent | 2020-05-31 16:13:13 | coq-community/manifesto | https://api.github.com/repos/coq-community/manifesto | opened | Change maintainer of project Stalmarck | change-maintainer maintainer-wanted | **Project name and URL:** https://github.com/coq-community/stalmarck
**Current maintainer:** @herbelin
**Status:** maintained
**New maintainer:** looking for a volunteer
@herbelin is currently maintaining this project (and also qarith-stern-brocot and bertrand), but because he's also the one taking care of all the remaining coq-contribs, he'd gladly accept if someone else wanted to take over this package (or one of the other two I suppose). Cf. https://github.com/coq-community/stalmarck/pull/12#issuecomment-636029984. | True | Change maintainer of project Stalmarck - **Project name and URL:** https://github.com/coq-community/stalmarck
**Current maintainer:** @herbelin
**Status:** maintained
**New maintainer:** looking for a volunteer
@herbelin is currently maintaining this project (and also qarith-stern-brocot and bertrand), but because he's also the one taking care of all the remaining coq-contribs, he'd gladly accept if someone else wanted to take over this package (or one of the other two I suppose). Cf. https://github.com/coq-community/stalmarck/pull/12#issuecomment-636029984. | main | change maintainer of project stalmarck project name and url current maintainer herbelin status maintained new maintainer looking for a volunteer herbelin is currently maintaining this project and also qarith stern brocot and bertrand but because he s also the one taking care of all the remaining coq contribs he d gladly accept if someone else wanted to take over this package or one of the other two i suppose cf | 1 |
32,075 | 12,061,780,616 | IssuesEvent | 2020-04-16 00:55:49 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Use correct OpenSSL libraries for FreeBSD | area-System.Security os-freebsd untriaged | src/libraries/Native/Unix/System.Security.Cryptography.Native/opensslshim.c
`OpenLibrary()` doesn't currently find the correct libraries on my FreeBSD 11.3 build box.
The version of OpenSSL included in the base 11.3 install is 1.0.2
```
[jason@freebsd11 ~/src/runtime]$ /usr/bin/openssl version
OpenSSL 1.0.2s-freebsd 28 May 2019
[jason@freebsd11 ~/src/runtime]$ ldd /usr/bin/openssl
/usr/bin/openssl:
libssl.so.8 => /usr/lib/libssl.so.8 (0x8008a4000)
libcrypto.so.8 => /lib/libcrypto.so.8 (0x800c00000)
libc.so.7 => /lib/libc.so.7 (0x801076000)
```
OpenSSL 1.1.1 can be installed with the FreeBSD package manager
```
[jason@freebsd11 ~/src/runtime]$ /usr/local/bin/openssl version
OpenSSL 1.1.1f 31 Mar 2020
[jason@freebsd11 ~/src/runtime]$ ldd /usr/local/bin/openssl
/usr/local/bin/openssl:
libssl.so.11 => /usr/local/lib/libssl.so.11 (0x8008b7000)
libcrypto.so.11 => /usr/local/lib/libcrypto.so.11 (0x800c00000)
libthr.so.3 => /lib/libthr.so.3 (0x8010ef000)
libc.so.7 => /lib/libc.so.7 (0x801317000)
```
`OpenLibrary()` needs to look for `libssl.so.11` and `libssl.so.8` to support these versions.
@wfurt | True | Use correct OpenSSL libraries for FreeBSD - src/libraries/Native/Unix/System.Security.Cryptography.Native/opensslshim.c
`OpenLibrary()` doesn't currently find the correct libraries on my FreeBSD 11.3 build box.
The version of OpenSSL included in the base 11.3 install is 1.0.2
```
[jason@freebsd11 ~/src/runtime]$ /usr/bin/openssl version
OpenSSL 1.0.2s-freebsd 28 May 2019
[jason@freebsd11 ~/src/runtime]$ ldd /usr/bin/openssl
/usr/bin/openssl:
libssl.so.8 => /usr/lib/libssl.so.8 (0x8008a4000)
libcrypto.so.8 => /lib/libcrypto.so.8 (0x800c00000)
libc.so.7 => /lib/libc.so.7 (0x801076000)
```
OpenSSL 1.1.1 can be installed with the FreeBSD package manager
```
[jason@freebsd11 ~/src/runtime]$ /usr/local/bin/openssl version
OpenSSL 1.1.1f 31 Mar 2020
[jason@freebsd11 ~/src/runtime]$ ldd /usr/local/bin/openssl
/usr/local/bin/openssl:
libssl.so.11 => /usr/local/lib/libssl.so.11 (0x8008b7000)
libcrypto.so.11 => /usr/local/lib/libcrypto.so.11 (0x800c00000)
libthr.so.3 => /lib/libthr.so.3 (0x8010ef000)
libc.so.7 => /lib/libc.so.7 (0x801317000)
```
`OpenLibrary()` needs to look for `libssl.so.11` and `libssl.so.8` to support these versions.
@wfurt | non_main | use correct openssl libraries for freebsd src libraries native unix system security cryptography native opensslshim c openlibrary doesn t currently find the correct libraries on my freebsd build box the version of openssl included in the base install is usr bin openssl version openssl freebsd may ldd usr bin openssl usr bin openssl libssl so usr lib libssl so libcrypto so lib libcrypto so libc so lib libc so openssl can be installed with the freebsd package manager usr local bin openssl version openssl mar ldd usr local bin openssl usr local bin openssl libssl so usr local lib libssl so libcrypto so usr local lib libcrypto so libthr so lib libthr so libc so lib libc so openlibrary needs to look for libssl so and libssl so to support these versions wfurt | 0 |
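The dotnet/runtime record above asks `OpenLibrary()` to probe several versioned sonames until one loads. A minimal Python sketch of that fallback idea follows; the candidate list, its ordering, and the function name are illustrative assumptions, not the actual opensslshim.c code:

```python
import ctypes

# Illustrative candidate sonames, tried in order of preference.
CANDIDATES = ["libssl.so.1.1", "libssl.so.11", "libssl.so.10", "libssl.so.8"]

def open_first_available(names, loader=ctypes.CDLL):
    """Try each soname in turn; return the first handle that loads, else None."""
    for name in names:
        try:
            return loader(name)
        except OSError:
            # This soname is absent on this system; fall through to the next.
            continue
    return None
```

The `loader` parameter is injected so the probing order can be tested without any OpenSSL installed; with real `ctypes.CDLL` on the FreeBSD 11.3 box described in the record, this ordering would fall through to the base system's `libssl.so.8` whenever the 1.1.1 package's `libssl.so.11` is not present.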
161,776 | 25,399,645,359 | IssuesEvent | 2022-11-22 11:05:56 | DeveloperAcademy-POSTECH/MacC-TEAM-8bit | https://api.github.com/repos/DeveloperAcademy-POSTECH/MacC-TEAM-8bit | opened | [design] Apply Linda's design feedback to MyActivityView | design asset | ### Brief description of the work
- Apply Linda's design feedback to MyActivityView.
### TODO
- [ ] Fix the padding inside the Card so it is consistent
- [ ] Add vertical lines inside the Chart
- [ ] Add dotted lines inside the Chart
### Work schedule (Optional)
2022.00.00 - 2022.00.00 (work dates)
|Work date|Work description|
|:---:|:---:|
|2022.00.00|SequenceNumber|
### Optional
- Things to know before starting the work
### Issue proposer (Optional)
- @person mention
- Please fill this in when the issue owner ≠ work owner.
 | 1.0 | [design] Apply Linda's design feedback to MyActivityView - ### Brief description of the work
- Apply Linda's design feedback to MyActivityView.
### TODO
- [ ] Fix the padding inside the Card so it is consistent
- [ ] Add vertical lines inside the Chart
- [ ] Add dotted lines inside the Chart
### Work schedule (Optional)
2022.00.00 - 2022.00.00 (work dates)
|Work date|Work description|
|:---:|:---:|
|2022.00.00|SequenceNumber|
### Optional
- Things to know before starting the work
### Issue proposer (Optional)
- @person mention
- Please fill this in when the issue owner ≠ work owner.
 | non_main | apply linda s design feedback to myactivityview brief description of the work apply linda s design feedback to myactivityview todo fix the padding inside the card so it is consistent add vertical lines inside the chart add dotted lines inside the chart work schedule optional work dates work date work description sequencenumber optional things to know before starting the work issue proposer optional person mention please fill this in when the issue owner ≠ work owner | 0
20,478 | 10,521,091,223 | IssuesEvent | 2019-09-30 04:28:08 | scxbush/bushnodegoat | https://api.github.com/repos/scxbush/bushnodegoat | opened | CVE-2019-10746 (High) detected in mixin-deep-1.3.1.tgz | security vulnerability | ## CVE-2019-10746 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mixin-deep-1.3.1.tgz</b></p></summary>
<p>Deeply mix the properties of objects into the first object. Like merge-deep, but doesn't clone.</p>
<p>Library home page: <a href="https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz">https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/bushnodegoat/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/bushnodegoat/node_modules/mixin-deep/package.json</p>
<p>
Dependency Hierarchy:
- grunt-cli-1.3.2.tgz (Root Library)
- liftoff-2.5.0.tgz
- findup-sync-2.0.0.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- base-0.11.2.tgz
- :x: **mixin-deep-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scxbush/bushnodegoat/commit/7b8d825ae251631caa57f9bd5ab987c3c4e39eaa">7b8d825ae251631caa57f9bd5ab987c3c4e39eaa</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
mixin-deep is vulnerable to Prototype Pollution in versions before 1.3.2 and version 2.0.0. The function mixin-deep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-08-23
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10746>CVE-2019-10746</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jonschlinkert/mixin-deep/commit/8f464c8ce9761a8c9c2b3457eaeee9d404fa7af9">https://github.com/jonschlinkert/mixin-deep/commit/8f464c8ce9761a8c9c2b3457eaeee9d404fa7af9</a></p>
<p>Release Date: 2019-07-11</p>
<p>Fix Resolution: 1.3.2</p>
</p>
</details>
<p></p>
| True | CVE-2019-10746 (High) detected in mixin-deep-1.3.1.tgz - ## CVE-2019-10746 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mixin-deep-1.3.1.tgz</b></p></summary>
<p>Deeply mix the properties of objects into the first object. Like merge-deep, but doesn't clone.</p>
<p>Library home page: <a href="https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz">https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/bushnodegoat/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/bushnodegoat/node_modules/mixin-deep/package.json</p>
<p>
Dependency Hierarchy:
- grunt-cli-1.3.2.tgz (Root Library)
- liftoff-2.5.0.tgz
- findup-sync-2.0.0.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- base-0.11.2.tgz
- :x: **mixin-deep-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scxbush/bushnodegoat/commit/7b8d825ae251631caa57f9bd5ab987c3c4e39eaa">7b8d825ae251631caa57f9bd5ab987c3c4e39eaa</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
mixin-deep is vulnerable to Prototype Pollution in versions before 1.3.2 and version 2.0.0. The function mixin-deep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-08-23
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10746>CVE-2019-10746</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jonschlinkert/mixin-deep/commit/8f464c8ce9761a8c9c2b3457eaeee9d404fa7af9">https://github.com/jonschlinkert/mixin-deep/commit/8f464c8ce9761a8c9c2b3457eaeee9d404fa7af9</a></p>
<p>Release Date: 2019-07-11</p>
<p>Fix Resolution: 1.3.2</p>
</p>
</details>
<p></p>
| non_main | cve high detected in mixin deep tgz cve high severity vulnerability vulnerable library mixin deep tgz deeply mix the properties of objects into the first object like merge deep but doesn t clone library home page a href path to dependency file tmp ws scm bushnodegoat package json path to vulnerable library tmp ws scm bushnodegoat node modules mixin deep package json dependency hierarchy grunt cli tgz root library liftoff tgz findup sync tgz micromatch tgz snapdragon tgz base tgz x mixin deep tgz vulnerable library found in head commit a href vulnerability details mixin deep is vulnerable to prototype pollution in versions before and version the function mixin deep could be tricked into adding or modifying properties of object prototype using a constructor payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
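The CVE record above concerns a deep-merge routine that copies attacker-controlled keys such as `constructor` and `__proto__` into shared objects. Python has no prototype chain, but the shape of the 1.3.2 fix, filtering dangerous keys before recursing, can be sketched as an analogy (the function name and blocked-key set are illustrative, not mixin-deep's actual source):

```python
BLOCKED_KEYS = {"__proto__", "constructor", "prototype"}

def mixin_deep(target, source):
    """Recursively merge source into target, skipping keys that would enable
    prototype-pollution-style attacks in a JavaScript setting."""
    for key, value in source.items():
        if key in BLOCKED_KEYS:
            continue  # analogous guard: refuse to copy these keys at any depth
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            mixin_deep(target[key], value)  # merge nested mappings in place
        else:
            target[key] = value
    return target
```

The key point mirrored from the advisory: the merge itself is unchanged, and the fix is purely a validation step on the keys being copied.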
3,109 | 11,868,526,820 | IssuesEvent | 2020-03-26 09:21:28 | chocolatey-community/chocolatey-package-requests | https://api.github.com/repos/chocolatey-community/chocolatey-package-requests | closed | RFM - Software Ideas Modeler | Status: Available For Maintainer(s) | ## Current Maintainer
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://chocolatey.org/packages/software-ideas-modeler
Package source URL: https://github.com/abejenaru/chocolatey-packages/tree/master/automatic/software-ideas-modeler
| True | RFM - Software Ideas Modeler - ## Current Maintainer
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://chocolatey.org/packages/software-ideas-modeler
Package source URL: https://github.com/abejenaru/chocolatey-packages/tree/master/automatic/software-ideas-modeler
| main | rfm software ideas modeler current maintainer i am the maintainer of the package and wish to pass it to someone else checklist issue title starts with rfm existing package details package url package source url | 1 |
7,796 | 11,045,681,364 | IssuesEvent | 2019-12-09 15:31:53 | marco-buttu/suricate | https://api.github.com/repos/marco-buttu/suricate | closed | Verify if ACS is running | requirement | In `suricate-server` and when getting the component or properties, check if ACS is running (from `Acspy.Utils.ACSCorba.getManager()`). If `getManager()` returns `None`, ACS is not running. | 1.0 | Verify if ACS is running - In `suricate-server` and when getting the component or properties, check if ACS is running (from `Acspy.Utils.ACSCorba.getManager()`). If `getManager()` returns `None`, ACS is not running. | non_main | verify if acs is running in suricate server and when getting the component or properties check if acs is running from acspy utils acscorba getmanager if getmanager returns none acs is not running | 0 |
269,374 | 8,435,247,834 | IssuesEvent | 2018-10-17 12:40:18 | CS2113-AY1819S1-T12-2/main | https://api.github.com/repos/CS2113-AY1819S1-T12-2/main | opened | As a doctor, I want to filter patients by more than 1 category so that I can have a list that is more thoroughly sorted out. | priority.high | - find tag by subsequent tags as well instead of only the first tag of the person
- ensure sort works
-more keywords | 1.0 | As a doctor, I want to filter patients by more than 1 category so that I can have a list that is more thoroughly sorted out. - - find tag by subsequent tags as well instead of only the first tag of the person
- ensure sort works
-more keywords | non_main | as a doctor i want to filter patients by more than category so that i can have a list that is more thoroughly sorted out find tag by subsequent tags as well instead of only the first tag of the person ensure sort works more keywords | 0 |
1,012 | 4,792,875,570 | IssuesEvent | 2016-10-31 16:38:48 | DynamoRIO/dynamorio | https://api.github.com/repos/DynamoRIO/dynamorio | closed | ARM crash report register list is truncated at r9 | Maintainability OpSys-ARM | From #2047:
```
<Application /home/derek/dr/git/build_suite/build_release-external-32/suite/tests/bin/linux.signal0011 (23424). DynamoRIO internal crash at PC 0x47f79bcc. Please report this at http://dynamorio.org/issues/. Program aborted.
Received SIGSEGV at generated pc 0x47f79bcc in thread 23424
Base: 0x71000000
Registers: r0 =0x47fb1940 r1 =0x7f831a40 r2 =0xb6f986b1 r3 =0x00000000
r4 =0xb6fb6048 r5 =0xbe8e8520 r6 =0xb6f97000 r7 =0xbe8e82c0
r8 =0xbe8e8590 r9
version 6.2.17102, custom build
-no_dynamic_options -code_api -msgbox_mask 12 -stderr_mask 15 -stack_size 56K -max_elide_jmp 0 -max_elide_call 0 -early_inject -emulate_brk -no_inline_ignored_syscalls -native_exec_default_list '' -no_native_exec_managed_code -no_indcall2direct
0xb6f9733c 0x0000fba1>
```
We need to increase the crash report buffer size for A32.
| True | ARM crash report register list is truncated at r9 - From #2047:
```
<Application /home/derek/dr/git/build_suite/build_release-external-32/suite/tests/bin/linux.signal0011 (23424). DynamoRIO internal crash at PC 0x47f79bcc. Please report this at http://dynamorio.org/issues/. Program aborted.
Received SIGSEGV at generated pc 0x47f79bcc in thread 23424
Base: 0x71000000
Registers: r0 =0x47fb1940 r1 =0x7f831a40 r2 =0xb6f986b1 r3 =0x00000000
r4 =0xb6fb6048 r5 =0xbe8e8520 r6 =0xb6f97000 r7 =0xbe8e82c0
r8 =0xbe8e8590 r9
version 6.2.17102, custom build
-no_dynamic_options -code_api -msgbox_mask 12 -stderr_mask 15 -stack_size 56K -max_elide_jmp 0 -max_elide_call 0 -early_inject -emulate_brk -no_inline_ignored_syscalls -native_exec_default_list '' -no_native_exec_managed_code -no_indcall2direct
0xb6f9733c 0x0000fba1>
```
We need to increase the crash report buffer size for A32.
| main | arm crash report register list is truncated at from application home derek dr git build suite build release external suite tests bin linux dynamorio internal crash at pc please report this at program aborted received sigsegv at generated pc in thread base registers version custom build no dynamic options code api msgbox mask stderr mask stack size max elide jmp max elide call early inject emulate brk no inline ignored syscalls native exec default list no native exec managed code no we need to increase the crash report buffer size for | 1 |
102,922 | 4,162,530,121 | IssuesEvent | 2016-06-17 20:46:44 | Theophilix/event-table-edit | https://api.github.com/repos/Theophilix/event-table-edit | closed | Backend: appointment function: set booking time limit in general and for each cell in CSV | enhancement low priority | Sometimes, you do not want to have people book appointments too spontaneously. So there must be a time limit (x hours). The table should only show periods/times as "free" when they are within the time limit. That way, a person can only book appointment when the user time is x hours before the selected time period (we have to respect timezones here).
In a CSV there should be an option to put the time limit into brackets: free[12]. | 1.0 | Backend: appointment function: set booking time limit in general and for each cell in CSV - Sometimes, you do not want to have people book appointments too spontaneously. So there must be a time limit (x hours). The table should only show periods/times as "free" when they are within the time limit. That way, a person can only book appointment when the user time is x hours before the selected time period (we have to respect timezones here).
In a CSV there should be an option to put the time limit into brackets: free[12]. | non_main | backend appointment function set booking time limit in general and for each cell in csv sometimes you do not want to have people book appointments too spontaneously so there must be a time limit x hours the table should only show periods times as free when they are within the time limit that way a person can only book appointment when the user time is x hours before the selected time period we have to respect timezones here in a csv there should be an option to put the time limit into brackets free | 0 |
1,376 | 5,957,017,105 | IssuesEvent | 2017-05-28 22:11:05 | OpenLightingProject/ola | https://api.github.com/repos/OpenLightingProject/ola | closed | OLA fails to build with GCC 6.2.x | Difficulty-Medium Language-C++ Maintainability OpSys-Linux | Hi, the QLC+ project uses the OpenSUSE build service to automate builds for several Linux distros.
Recently I have updated the targeted distros to support a let's say 3 years window, but found that GCC 6.2 is not happy about OLA.
In particular the `common/file/Util.cpp` file.
As there haven't been changes since 5 months on that file, I decided to open this.
You can find all the related build logs here: https://build.opensuse.org/package/show/home:mcallegari79/ola
If you need more info, I'm here | True | OLA fails to build with GCC 6.2.x - Hi, the QLC+ project uses the OpenSUSE build service to automate builds for several Linux distros.
Recently I have updated the targeted distros to support a let's say 3 years window, but found that GCC 6.2 is not happy about OLA.
In particular the `common/file/Util.cpp` file.
As there haven't been changes since 5 months on that file, I decided to open this.
You can find all the related build logs here: https://build.opensuse.org/package/show/home:mcallegari79/ola
If you need more info, I'm here | main | ola fails to build with gcc x hi the qlc project uses the opensuse build service to automate builds for several linux distros recently i have updated the targeted distros to support a let s say years window but found that gcc is not happy about ola in particular the common file util cpp file as there haven t been changes since months on that file i decided to open this you can find all the related build logs here if you need more info i m here | 1 |
1,018 | 4,804,764,435 | IssuesEvent | 2016-11-02 14:28:05 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | DNF module not able to install existing group | affects_2.0 bot_broken bug_report feature_idea in progress waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
dnf module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes -->
```
"ansible 2.0.2.0"
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
no changes to default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
fedora 23
##### SUMMARY
<!--- Explain the problem briefly -->
when trying to install a package group "Xfce desktop" using ansible it fails, while it runs normally using the dnf groupinstall "Xfce desktop" command.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
run command:
ansible-playbook -i inventory -b --ask-become-pass testserver.yml
task for installing Xfce desktop:
```
- name: Ensure XFCE is installed
dnf:
name="@Xfce desktop"
state=present
```
reproducible: always
see above
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
That the package group "Xfce desktop" is installed without errors using the ansible dnf module.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
Fatal error.
```
"fatal: [testserver]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"conf_file": null, "disable_gpg_check": false, "disablerepo": [], "enablerepo": [], "list": null, "name": ["Xfce desktop"], "state": "present"}, "module_name": "dnf"}, "msg": "No package Xfce desktop available."}"
```
| True | DNF module not able to install existing group - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
dnf module
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes -->
```
"ansible 2.0.2.0"
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
no changes to default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
fedora 23
##### SUMMARY
<!--- Explain the problem briefly -->
when trying to install a package group "Xfce desktop" using ansible it fails, while it runs normally using the dnf groupinstall "Xfce desktop" command.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
run command:
ansible-playbook -i inventory -b --ask-become-pass testserver.yml
task for installing Xfce desktop:
```
- name: Ensure XFCE is installed
dnf:
name="@Xfce desktop"
state=present
```
reproducible: always
see above
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
That the package group "Xfce desktop" is installed without errors using the ansible dnf module.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
Fatal error.
```
"fatal: [testserver]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"conf_file": null, "disable_gpg_check": false, "disablerepo": [], "enablerepo": [], "list": null, "name": ["Xfce desktop"], "state": "present"}, "module_name": "dnf"}, "msg": "No package Xfce desktop available."}"
```
| main | dnf module not able to install existing group issue type bug report component name dnf module ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables no changes to default os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific fedora summary when trying to install a package group xfce desktop using ansible it fails while it runs normally using the dnf groupinstall xfce desktop command steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used run command ansible playbook i inventory b ask become pass testserver yml task for installing xfce desktop name ensure xfce is installed dnf name xfce desktop state present reproducible always see above expected results that the package group xfce desktop is installed without errors using the ansible dnf module actual results fatal error fatal failed changed false failed true invocation module args conf file null disable gpg check false disablerepo enablerepo list null name state present module name dnf msg no package xfce desktop available | 1 |
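The failure above ("No package Xfce desktop available") happens when a group name is resolved through the package path instead of dnf's separate group API. A minimal sketch of the name routing a dnf wrapper needs — illustrative Python, not the actual Ansible module source; `classify_names` is a hypothetical helper:

```python
def classify_names(names):
    """Split requested names into plain packages and groups.

    Hypothetical helper, not the actual Ansible module code: the dnf
    API installs groups through a separate call, so names prefixed
    with '@' must be routed there instead of the package resolver
    (which otherwise reports "No package ... available").
    """
    packages, groups = [], []
    for name in names:
        if name.startswith("@"):
            groups.append(name[1:])  # strip the '@' group marker
        else:
            packages.append(name)
    return packages, groups

pkgs, grps = classify_names(["vim", "@Xfce desktop"])
print(pkgs, grps)  # -> ['vim'] ['Xfce desktop']
```

In a real module the group branch would then call dnf's group-install API; the sketch only shows the routing step the reported version was missing.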
400,060 | 27,266,932,047 | IssuesEvent | 2023-02-22 18:48:59 | dhowe/AdNauseam | https://api.github.com/repos/dhowe/AdNauseam | closed | Merge to uBlock Origin 1.45.0 | Documentation | 1.45.0 has the development of the mv3 build, so we can try start testing based on it.
This version has quite a few changes and will need a bit more reviewing to catch all issues. I had to change most of our files to modules so the i18n can be imported (since it is now also a js module).
Some of the filtering mechanism seems to have changed which is causing some filters not be considered valid for some reason, checking this now. | 1.0 | Merge to uBlock Origin 1.45.0 - 1.45.0 has the development of the mv3 build, so we can try start testing based on it.
This version has quite a few changes and will need a bit more reviewing to catch all issues. I had to change most of our files to modules so the i18n can be imported (since it is now also a js module).
Some of the filtering mechanism seems to have changed which is causing some filters not be considered valid for some reason, checking this now. | non_main | merge to ublock origin has the development of the build so we can try start testing based on it this version has quite a few changes will need a bit more reviewing to catch all issues i had to change most of our files now to modules so the can be imported since it is now also a js module some of the filtering mechanism seems to have changed which is causing some filters not be considered valid for some reason checking this now | 0 |
4,705 | 24,270,826,796 | IssuesEvent | 2022-09-28 10:07:20 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | SEO | Pages have duplicate content issues | engineering Maintain | This set of pages are duplicates. These pages can be fixed by either:
- Add a rel=""canonical"" link to one of your duplicate pages to inform search engines which page to show in search results or Use a 301 redirect from a duplicate page to the original one"
https://docs.google.com/spreadsheets/d/15HwgpxSYc4Zl809kcebAhLfLYXFuIk8ZP-Qvk3yVV8Q/edit#gid=273843097 | True | SEO | Pages have duplicate content issues - This set of pages are duplicates. These pages can be fixed by either:
- Add a rel=""canonical"" link to one of your duplicate pages to inform search engines which page to show in search results or Use a 301 redirect from a duplicate page to the original one"
https://docs.google.com/spreadsheets/d/15HwgpxSYc4Zl809kcebAhLfLYXFuIk8ZP-Qvk3yVV8Q/edit#gid=273843097 | main | seo pages have duplicate content issues this set of pages are duplicates these pages can be fixed by either add a rel canonical link to one of your duplicate pages to inform search engines which page to show in search results or use a redirect from a duplicate page to the original one | 1 |
103,048 | 22,173,337,919 | IssuesEvent | 2022-06-06 05:05:12 | Brendonovich/prisma-client-rust | https://api.github.com/repos/Brendonovich/prisma-client-rust | closed | Enum defined twice if `@id` is also marked with `@unique` | bug codegen | ```
Compiling glowsquid v0.1.0 (/mnt/BulkStorage/projects/github.com/glowsquid-launcher/glowsquid/apps/oxidize)
error[E0428]: the name `IdEquals` is defined multiple times
--> src/prisma.rs:875:5
|
873 | IdEquals(String),
| ---------------- previous definition of the type `IdEquals` here
874 | UsernameEquals(String),
875 | IdEquals(String),
| ^^^^^^^^^^^^^^^^ `IdEquals` redefined here
|
= note: `IdEquals` must be defined only once in the type namespace of this enum
For more information about this error, try `rustc --explain E0428`.
error: could not compile `glowsquid` due to previous error
[Process exited 0]
```
schema:
```prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
generator client {
provider = "cargo prisma"
output = "../src/prisma.rs"
}
datasource db {
provider = "sqlite"
url = "file:dev.db"
}
/// A minecraft account
model Account {
/// The account's unique id
id String @id @unique // <-- issue is here
/// The account's username
username String @unique
}
``` | 1.0 | Enum defined twice if `@id` is also marked with `@unique` - ```
Compiling glowsquid v0.1.0 (/mnt/BulkStorage/projects/github.com/glowsquid-launcher/glowsquid/apps/oxidize)
error[E0428]: the name `IdEquals` is defined multiple times
--> src/prisma.rs:875:5
|
873 | IdEquals(String),
| ---------------- previous definition of the type `IdEquals` here
874 | UsernameEquals(String),
875 | IdEquals(String),
| ^^^^^^^^^^^^^^^^ `IdEquals` redefined here
|
= note: `IdEquals` must be defined only once in the type namespace of this enum
For more information about this error, try `rustc --explain E0428`.
error: could not compile `glowsquid` due to previous error
[Process exited 0]
```
schema:
```prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
generator client {
provider = "cargo prisma"
output = "../src/prisma.rs"
}
datasource db {
provider = "sqlite"
url = "file:dev.db"
}
/// A minecraft account
model Account {
/// The account's unique id
id String @id @unique // <-- issue is here
/// The account's username
username String @unique
}
``` | non_main | enum defined twice if id is also marked with unique compiling glowsquid mnt bulkstorage projects github com glowsquid launcher glowsquid apps oxidize error the name idequals is defined multiple times src prisma rs idequals string previous definition of the type idequals here usernameequals string idequals string idequals redefined here note idequals must be defined only once in the type namespace of this enum for more information about this error try rustc explain error could not compile glowsquid due to previous error schema prisma this is your prisma schema file learn more about it in the docs generator client provider cargo prisma output src prisma rs datasource db provider sqlite url file dev db a minecraft account model account the account s unique id id string id unique issue is here the account s username username string unique | 0 |
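The double `IdEquals` arises because a field marked both `@id` and `@unique` is collected twice when the generator enumerates unique-lookup variants. A hedged sketch of the order-preserving deduplication that avoids the clash — illustrative Python, not the actual prisma-client-rust codegen; `unique_variants` is a hypothetical name:

```python
def unique_variants(id_fields, unique_fields):
    """Build where-param variant names for a model's lookup fields.

    Hypothetical sketch of the fix, not the actual generator: a field
    marked both @id and @unique appears in both input lists, so
    deduplicate while preserving order to avoid emitting `IdEquals`
    twice in the generated enum.
    """
    seen = set()
    variants = []
    for field in id_fields + unique_fields:
        if field not in seen:
            seen.add(field)
            variants.append(field.capitalize() + "Equals")
    return variants

print(unique_variants(["id"], ["username", "id"]))
# -> ['IdEquals', 'UsernameEquals']
```

The same dedupe applied at codegen time would make the schema above (with `id String @id @unique`) compile without E0428.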
4,262 | 21,261,408,842 | IssuesEvent | 2022-04-13 04:58:37 | aws/aws-sam-cli | https://api.github.com/repos/aws/aws-sam-cli | closed | Error: Error building docker image: pull access denied for LOGICAL_NAME, when using CDK | type/question blocked/close-if-inactive maintainer/need-followup | <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
I have a lambda function that is defined using CDK. However, when I `aws sam invoke`, according to the documentation, I get the above error.
### Steps to reproduce:
I have created a git repository that lets you reproduce this bug: https://github.com/multimeric/SamInvokeCdk
Run the following commands:
```bash
git clone git@github.com:multimeric/SamInvokeCdk.git
cd SamInvokeCdk
npm install
cdk synth
sam local invoke customImageLambdaECCCB1E0 -t cdk.out/TmpWxaZf33UfiStack.template.json
```
### Observed result:
```bash
$ sam local invoke customImageLambdaECCCB1E0 -t cdk.out/TmpWxaZf33UfiStack.template.json --profile greening-web --region us-east-1 --debug
2022-03-15 15:37:46,483 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2022-03-15 15:37:46,484 | Using config file: samconfig.toml, config environment: default
2022-03-15 15:37:46,484 | Expand command line arguments to:
2022-03-15 15:37:46,484 | --template_file=/tmp/tmp.k8vPX9Fk2f/SamInvokeCdk/cdk.out/TmpWxaZf33UfiStack.template.json --function_logical_id=customImageLambdaECCCB1E0 --no_event --layer_cache_basedir=/home/migwell/.aws-sam/layers-pkg --container_host=localhost --container_host_interface=127.0.0.1
2022-03-15 15:37:46,484 | local invoke command is called
2022-03-15 15:37:46,484 | Collected default values for parameters: {'BootstrapVersion': '/cdk-bootstrap/hnb659fds/version'}
2022-03-15 15:37:46,497 | CDK Path for resource customImageLambdaServiceRole02CD8460 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'ServiceRole', 'Resource']
2022-03-15 15:37:46,497 | CDK Path for resource customImageLambdaECCCB1E0 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'Resource']
2022-03-15 15:37:46,497 | CDK Path for resource CDKMetadata is ['TmpWxaZf33UfiStack', 'CDKMetadata', 'Default']
2022-03-15 15:37:46,498 | 3 stacks found in the template
2022-03-15 15:37:46,498 | Collected default values for parameters: {'BootstrapVersion': '/cdk-bootstrap/hnb659fds/version'}
2022-03-15 15:37:46,512 | CDK Path for resource customImageLambdaServiceRole02CD8460 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'ServiceRole', 'Resource']
2022-03-15 15:37:46,512 | CDK Path for resource customImageLambdaECCCB1E0 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'Resource']
2022-03-15 15:37:46,512 | CDK Path for resource CDKMetadata is ['TmpWxaZf33UfiStack', 'CDKMetadata', 'Default']
2022-03-15 15:37:46,513 | 3 resources found in the stack
2022-03-15 15:37:46,513 | Collected default values for parameters: {'BootstrapVersion': '/cdk-bootstrap/hnb659fds/version'}
2022-03-15 15:37:46,527 | CDK Path for resource customImageLambdaServiceRole02CD8460 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'ServiceRole', 'Resource']
2022-03-15 15:37:46,527 | CDK Path for resource customImageLambdaECCCB1E0 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'Resource']
2022-03-15 15:37:46,527 | CDK Path for resource CDKMetadata is ['TmpWxaZf33UfiStack', 'CDKMetadata', 'Default']
2022-03-15 15:37:46,529 | Found Lambda function with name='customImageLambdaECCCB1E0' and Imageuri='customimagelambdaecccb1e0'
2022-03-15 15:37:46,529 | --base-dir is not presented, adjusting uri asset.8ef90e23f9c95edd96726ef3e331815e9cc7076c2317f2b4b370470a8ee84994 relative to /tmp/tmp.k8vPX9Fk2f/SamInvokeCdk/cdk.out/TmpWxaZf33UfiStack.template.json
2022-03-15 15:37:46,529 | --base-dir is not presented, adjusting uri . relative to /tmp/tmp.k8vPX9Fk2f/SamInvokeCdk/cdk.out/TmpWxaZf33UfiStack.template.json
2022-03-15 15:37:46,535 | Found one Lambda function with name 'customImageLambdaECCCB1E0'
2022-03-15 15:37:46,535 | Invoking Container created from customimagelambdaecccb1e0
2022-03-15 15:37:46,535 | No environment variables found for function 'customImageLambdaECCCB1E0'
2022-03-15 15:37:46,535 | Environment variables overrides data is standard format
2022-03-15 15:37:46,535 | Loading AWS credentials from session with profile 'greening-web'
2022-03-15 15:37:46,550 | Code None is not a zip/jar file
2022-03-15 15:37:46,553 | Image was not found.
2022-03-15 15:37:46,553 | Removing rapid images for repo customimagelambdaecccb1e0
Building image......
2022-03-15 15:37:51,379 | Failed to build Docker Image
NoneType: None
2022-03-15 15:37:51,386 | Cleaning all decompressed code dirs
2022-03-15 15:37:51,388 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '4974b80d-94bc-41a2-8a12-137b62fc5db1', 'installationId': '6cecf8c4-5ccf-40de-af8c-f21352dd45f9', 'sessionId': '4af652a7-2240-4d99-9624-c51d67bc8e4a', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.7.10', 'samcliVersion': '1.40.1', 'awsProfileProvided': True, 'debugFlagProvided': True, 'region': 'us-east-1', 'commandName': 'sam local invoke', 'metricSpecificAttributes': {'projectType': 'CDK'}, 'duration': 4904, 'exitReason': 'ImageBuildException', 'exitCode': 1}}]}
2022-03-15 15:37:52,322 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
Error: Error building docker image: pull access denied for customimagelambdaecccb1e0, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
```
### Expected result:
The lambda to be invoked
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Ubuntu 21.04
2. If using SAM CLI, `sam --version`: 1.40.1
3. AWS region: `us-east-1`
| True | Error: Error building docker image: pull access denied for LOGICAL_NAME, when using CDK - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description:
I have a lambda function that is defined using CDK. However, when I `aws sam invoke`, according to the documentation, I get the above error.
### Steps to reproduce:
I have created a git repository that lets you reproduce this bug: https://github.com/multimeric/SamInvokeCdk
Run the following commands:
```bash
git clone git@github.com:multimeric/SamInvokeCdk.git
cd SamInvokeCdk
npm install
cdk synth
sam local invoke customImageLambdaECCCB1E0 -t cdk.out/TmpWxaZf33UfiStack.template.json
```
### Observed result:
```bash
$ sam local invoke customImageLambdaECCCB1E0 -t cdk.out/TmpWxaZf33UfiStack.template.json --profile greening-web --region us-east-1 --debug
2022-03-15 15:37:46,483 | Telemetry endpoint configured to be https://aws-serverless-tools-telemetry.us-west-2.amazonaws.com/metrics
2022-03-15 15:37:46,484 | Using config file: samconfig.toml, config environment: default
2022-03-15 15:37:46,484 | Expand command line arguments to:
2022-03-15 15:37:46,484 | --template_file=/tmp/tmp.k8vPX9Fk2f/SamInvokeCdk/cdk.out/TmpWxaZf33UfiStack.template.json --function_logical_id=customImageLambdaECCCB1E0 --no_event --layer_cache_basedir=/home/migwell/.aws-sam/layers-pkg --container_host=localhost --container_host_interface=127.0.0.1
2022-03-15 15:37:46,484 | local invoke command is called
2022-03-15 15:37:46,484 | Collected default values for parameters: {'BootstrapVersion': '/cdk-bootstrap/hnb659fds/version'}
2022-03-15 15:37:46,497 | CDK Path for resource customImageLambdaServiceRole02CD8460 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'ServiceRole', 'Resource']
2022-03-15 15:37:46,497 | CDK Path for resource customImageLambdaECCCB1E0 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'Resource']
2022-03-15 15:37:46,497 | CDK Path for resource CDKMetadata is ['TmpWxaZf33UfiStack', 'CDKMetadata', 'Default']
2022-03-15 15:37:46,498 | 3 stacks found in the template
2022-03-15 15:37:46,498 | Collected default values for parameters: {'BootstrapVersion': '/cdk-bootstrap/hnb659fds/version'}
2022-03-15 15:37:46,512 | CDK Path for resource customImageLambdaServiceRole02CD8460 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'ServiceRole', 'Resource']
2022-03-15 15:37:46,512 | CDK Path for resource customImageLambdaECCCB1E0 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'Resource']
2022-03-15 15:37:46,512 | CDK Path for resource CDKMetadata is ['TmpWxaZf33UfiStack', 'CDKMetadata', 'Default']
2022-03-15 15:37:46,513 | 3 resources found in the stack
2022-03-15 15:37:46,513 | Collected default values for parameters: {'BootstrapVersion': '/cdk-bootstrap/hnb659fds/version'}
2022-03-15 15:37:46,527 | CDK Path for resource customImageLambdaServiceRole02CD8460 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'ServiceRole', 'Resource']
2022-03-15 15:37:46,527 | CDK Path for resource customImageLambdaECCCB1E0 is ['TmpWxaZf33UfiStack', 'customImageLambda', 'Resource']
2022-03-15 15:37:46,527 | CDK Path for resource CDKMetadata is ['TmpWxaZf33UfiStack', 'CDKMetadata', 'Default']
2022-03-15 15:37:46,529 | Found Lambda function with name='customImageLambdaECCCB1E0' and Imageuri='customimagelambdaecccb1e0'
2022-03-15 15:37:46,529 | --base-dir is not presented, adjusting uri asset.8ef90e23f9c95edd96726ef3e331815e9cc7076c2317f2b4b370470a8ee84994 relative to /tmp/tmp.k8vPX9Fk2f/SamInvokeCdk/cdk.out/TmpWxaZf33UfiStack.template.json
2022-03-15 15:37:46,529 | --base-dir is not presented, adjusting uri . relative to /tmp/tmp.k8vPX9Fk2f/SamInvokeCdk/cdk.out/TmpWxaZf33UfiStack.template.json
2022-03-15 15:37:46,535 | Found one Lambda function with name 'customImageLambdaECCCB1E0'
2022-03-15 15:37:46,535 | Invoking Container created from customimagelambdaecccb1e0
2022-03-15 15:37:46,535 | No environment variables found for function 'customImageLambdaECCCB1E0'
2022-03-15 15:37:46,535 | Environment variables overrides data is standard format
2022-03-15 15:37:46,535 | Loading AWS credentials from session with profile 'greening-web'
2022-03-15 15:37:46,550 | Code None is not a zip/jar file
2022-03-15 15:37:46,553 | Image was not found.
2022-03-15 15:37:46,553 | Removing rapid images for repo customimagelambdaecccb1e0
Building image......
2022-03-15 15:37:51,379 | Failed to build Docker Image
NoneType: None
2022-03-15 15:37:51,386 | Cleaning all decompressed code dirs
2022-03-15 15:37:51,388 | Sending Telemetry: {'metrics': [{'commandRun': {'requestId': '4974b80d-94bc-41a2-8a12-137b62fc5db1', 'installationId': '6cecf8c4-5ccf-40de-af8c-f21352dd45f9', 'sessionId': '4af652a7-2240-4d99-9624-c51d67bc8e4a', 'executionEnvironment': 'CLI', 'ci': False, 'pyversion': '3.7.10', 'samcliVersion': '1.40.1', 'awsProfileProvided': True, 'debugFlagProvided': True, 'region': 'us-east-1', 'commandName': 'sam local invoke', 'metricSpecificAttributes': {'projectType': 'CDK'}, 'duration': 4904, 'exitReason': 'ImageBuildException', 'exitCode': 1}}]}
2022-03-15 15:37:52,322 | HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
Error: Error building docker image: pull access denied for customimagelambdaecccb1e0, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
```
### Expected result:
The lambda to be invoked
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Ubuntu 21.04
2. If using SAM CLI, `sam --version`: 1.40.1
3. AWS region: `us-east-1`
| main | error error building docker image pull access denied for logical name when using cdk make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description i have a lambda function that is defined using cdk however when i aws sam invoke according to the documentation i get the above error steps to reproduce i have created a git repository that lets you reproduce this bug run the following commands bash git clone git github com multimeric saminvokecdk git cd saminvokecdk npm install cdk synth sam local invoke t cdk out template json observed result bash sam local invoke t cdk out template json profile greening web region us east debug telemetry endpoint configured to be using config file samconfig toml config environment default expand command line arguments to template file tmp tmp saminvokecdk cdk out template json function logical id no event layer cache basedir home migwell aws sam layers pkg container host localhost container host interface local invoke command is called collected default values for parameters bootstrapversion cdk bootstrap version cdk path for resource is cdk path for resource is cdk path for resource cdkmetadata is stacks found in the template collected default values for parameters bootstrapversion cdk bootstrap version cdk path for resource is cdk path for resource is cdk path for resource cdkmetadata is resources found in the stack collected default values for parameters bootstrapversion cdk bootstrap version cdk path for resource is cdk path for resource is cdk path for resource cdkmetadata is found lambda function with name and imageuri base dir is not presented adjusting uri asset relative to tmp tmp saminvokecdk cdk out template json base dir is not presented adjusting uri relative to tmp tmp saminvokecdk cdk out template json found one lambda function with name invoking container created from no environment variables found for function environment variables overrides data is standard format loading aws credentials from session with profile greening web code none is not a zip jar file image was not found removing rapid images for repo building image failed to build docker image nonetype none cleaning all decompressed code dirs sending telemetry metrics httpsconnectionpool host aws serverless tools telemetry us west amazonaws com port read timed out read timeout error error building docker image pull access denied for repository does not exist or may require docker login denied requested access to the resource is denied expected result the lambda to be invoked additional environment details ex windows mac amazon linux etc os ubuntu if using sam cli sam version aws region us east | 1
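The debug log above shows why the final error message is misleading: the image tag is derived from the CDK logical id (`customimagelambdaecccb1e0`), it is not in the local Docker cache, and the fallback `docker pull` then treats it as a Docker Hub repository — hence "pull access denied" for an image that was supposed to be built from the CDK asset directory. A rough sketch of that decision order (hypothetical helper, not actual SAM CLI code):

```python
def resolve_image(tag, local_images):
    """Decide how a Lambda runtime image tag should be obtained.

    Hypothetical sketch of the failure mode in the log, not actual
    SAM CLI logic: a tag derived from a CDK logical id has no
    registry prefix and is not cached locally, so a bare
    `docker pull` treats it as a Docker Hub repo and is denied.
    """
    if tag in local_images:
        return "use-local"
    # Registry references contain a slash (repo path) or an explicit
    # `:tag` suffix on the final path component.
    if "/" in tag or ":" in tag.split("/")[-1]:
        return "pull-from-registry"
    return "build-from-asset"  # must first be built from the CDK asset dir

print(resolve_image("customimagelambdaecccb1e0", set()))
# -> build-from-asset
```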
4,172 | 19,985,500,993 | IssuesEvent | 2022-01-30 15:50:06 | BioArchLinux/Packages | https://api.github.com/repos/BioArchLinux/Packages | opened | [MAINTAIN] r-rgin | maintain | <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
```
gcc -I"/usr/include/R/" -DNDEBUG -I'/usr/lib/R/library/RcppEigen/include' -D_FORTIFY_SOURCE=2 -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c Rgin-init.c -o Rgin-init.o
g++ -std=gnu++14 -I"/usr/include/R/" -DNDEBUG -I'/usr/lib/R/library/RcppEigen/include' -D_FORTIFY_SOURCE=2 -std=c++11 -fopenmp -D_USE_KNETFILE -D_FILE_OFFSET_BITS=64 -U_FORTIFY_SOURCE -DBGZF_CACHE -DAS_GINLIB -DAS_RGINLIB -I./include -I./lib `/usr/lib64/R/bin/Rscript -e "Rcpp:::CxxFlags()"` -lz -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c src/feature_selection/feature_selector.cc -o src/feature_selection/feature_selector.o
g++ -std=gnu++14 -I"/usr/include/R/" -DNDEBUG -I'/usr/lib/R/library/RcppEigen/include' -D_FORTIFY_SOURCE=2 -std=c++11 -fopenmp -D_USE_KNETFILE -D_FILE_OFFSET_BITS=64 -U_FORTIFY_SOURCE -DBGZF_CACHE -DAS_GINLIB -DAS_RGINLIB -I./include -I./lib `/usr/lib64/R/bin/Rscript -e "Rcpp:::CxxFlags()"` -lz -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c src/feature_selection/scones.cc -o src/feature_selection/scones.o
In file included from ./include/gin/feature_selection/scones.h:8,
from src/feature_selection/feature_selector.cc:9:
./include/gin/globals.h:60:10: fatal error: Rcpp.h: No such file or directory
60 | #include "Rcpp.h"
| ^~~~~~~~
compilation terminated.
In file included from ./include/gin/feature_selection/scones.h:8,
from src/feature_selection/scones.cc:5:
./include/gin/globals.h:60:10: fatal error: Rcpp.h: No such file or directory
60 | #include "Rcpp.h"
| ^~~~~~~~
compilation terminated.
make: *** [/usr/lib64/R/etc/Makeconf:175: src/feature_selection/feature_selector.o] Error 1
make: *** Waiting for unfinished jobs....
make: *** [/usr/lib64/R/etc/Makeconf:175: src/feature_selection/scones.o] Error 1
```
</details>
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
https://log.bioarchlinux.org/2022-01-28T13%3A17%3A39/r-rgin.log
| True | [MAINTAIN] r-rgin - <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
```
gcc -I"/usr/include/R/" -DNDEBUG -I'/usr/lib/R/library/RcppEigen/include' -D_FORTIFY_SOURCE=2 -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c Rgin-init.c -o Rgin-init.o
g++ -std=gnu++14 -I"/usr/include/R/" -DNDEBUG -I'/usr/lib/R/library/RcppEigen/include' -D_FORTIFY_SOURCE=2 -std=c++11 -fopenmp -D_USE_KNETFILE -D_FILE_OFFSET_BITS=64 -U_FORTIFY_SOURCE -DBGZF_CACHE -DAS_GINLIB -DAS_RGINLIB -I./include -I./lib `/usr/lib64/R/bin/Rscript -e "Rcpp:::CxxFlags()"` -lz -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c src/feature_selection/feature_selector.cc -o src/feature_selection/feature_selector.o
g++ -std=gnu++14 -I"/usr/include/R/" -DNDEBUG -I'/usr/lib/R/library/RcppEigen/include' -D_FORTIFY_SOURCE=2 -std=c++11 -fopenmp -D_USE_KNETFILE -D_FILE_OFFSET_BITS=64 -U_FORTIFY_SOURCE -DBGZF_CACHE -DAS_GINLIB -DAS_RGINLIB -I./include -I./lib `/usr/lib64/R/bin/Rscript -e "Rcpp:::CxxFlags()"` -lz -fpic -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -c src/feature_selection/scones.cc -o src/feature_selection/scones.o
In file included from ./include/gin/feature_selection/scones.h:8,
from src/feature_selection/feature_selector.cc:9:
./include/gin/globals.h:60:10: fatal error: Rcpp.h: No such file or directory
60 | #include "Rcpp.h"
| ^~~~~~~~
compilation terminated.
In file included from ./include/gin/feature_selection/scones.h:8,
from src/feature_selection/scones.cc:5:
./include/gin/globals.h:60:10: fatal error: Rcpp.h: No such file or directory
60 | #include "Rcpp.h"
| ^~~~~~~~
compilation terminated.
make: *** [/usr/lib64/R/etc/Makeconf:175: src/feature_selection/feature_selector.o] Error 1
make: *** Waiting for unfinished jobs....
make: *** [/usr/lib64/R/etc/Makeconf:175: src/feature_selection/scones.o] Error 1
```
</details>
**Packages (please complete the following information):**
- Package Name: [e.g. iqtree]
**Description**
https://log.bioarchlinux.org/2022-01-28T13%3A17%3A39/r-rgin.log
| main | r rgin please report the error of one package in one issue use multi issues to report multi bugs thanks log of the bug gcc i usr include r dndebug i usr lib r library rcppeigen include d fortify source fpic march mtune generic pipe fno plt c rgin init c o rgin init o g std gnu i usr include r dndebug i usr lib r library rcppeigen include d fortify source std c fopenmp d use knetfile d file offset bits u fortify source dbgzf cache das ginlib das rginlib i include i lib usr r bin rscript e rcpp cxxflags lz fpic march mtune generic pipe fno plt c src feature selection feature selector cc o src feature selection feature selector o g std gnu i usr include r dndebug i usr lib r library rcppeigen include d fortify source std c fopenmp d use knetfile d file offset bits u fortify source dbgzf cache das ginlib das rginlib i include i lib usr r bin rscript e rcpp cxxflags lz fpic march mtune generic pipe fno plt c src feature selection scones cc o src feature selection scones o in file included from include gin feature selection scones h from src feature selection feature selector cc include gin globals h fatal error rcpp h no such file or directory include rcpp h compilation terminated in file included from include gin feature selection scones h from src feature selection scones cc include gin globals h fatal error rcpp h no such file or directory include rcpp h compilation terminated make error make waiting for unfinished jobs make error packages please complete the following information package name description | 1 |
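The compile lines in the log pass RcppEigen's include directory but never Rcpp's own, so `#include "Rcpp.h"` cannot resolve; the embedded `Rscript -e "Rcpp:::CxxFlags()"` call is expected to contribute exactly that missing `-I` flag. A minimal sketch of the flag being constructed (hypothetical helper; the library path is only an example):

```python
def rcpp_cxx_flags(r_library_path):
    """Emulate what `Rcpp:::CxxFlags()` should emit: the -I flag for
    Rcpp's own headers, which the failing compile line lacked.
    (Hypothetical helper; the library path is an example.)
    """
    return "-I" + r_library_path.rstrip("/") + "/Rcpp/include"

flags = rcpp_cxx_flags("/usr/lib/R/library")
print(flags)  # -> -I/usr/lib/R/library/Rcpp/include
```

When that flag is absent from the `g++` invocation (for instance because the embedded Rscript call produced no output in the build chroot), the build fails exactly as logged.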
63,401 | 3,194,766,003 | IssuesEvent | 2015-09-30 13:51:56 | fusioninventory/fusioninventory-for-glpi | https://api.github.com/repos/fusioninventory/fusioninventory-for-glpi | closed | Fix entity transfers on packages and mirrors | Category: Deploy Priority: High Status: Closed Tracker: Tasks | ---
Author Name: **Kevin Roy** (@kiniou)
Original Redmine Issue: 2066, http://forge.fusioninventory.org/issues/2066
Original Date: 2011-12-10
Original Assignee: David Durieux
---
Currently, there is no way to transfer a package or a mirror from one entity to another
| 1.0 | Fix entity transfers on packages and mirrors - ---
Author Name: **Kevin Roy** (@kiniou)
Original Redmine Issue: 2066, http://forge.fusioninventory.org/issues/2066
Original Date: 2011-12-10
Original Assignee: David Durieux
---
Currently, there is no way to transfer a package or a mirror from one entity to another
| non_main | fix entity transfers on packages and mirrors author name kevin roy kiniou original redmine issue original date original assignee david durieux actually there is no way to transfer a package or a mirror from an entity to another | 0 |
28,868 | 5,399,760,935 | IssuesEvent | 2017-02-27 20:16:21 | canadainc/sunnah10 | https://api.github.com/repos/canadainc/sunnah10 | closed | Add tab for lectures | auto-migrated invalid Priority-Medium Type-Defect | ```
Add tab for some of the lectures from various sources.
```
Original issue reported on code.google.com by `canadai...@gmail.com` on 25 Jul 2013 at 7:06
| 1.0 | Add tab for lectures - ```
Add tab for some of the lectures from various sources.
```
Original issue reported on code.google.com by `canadai...@gmail.com` on 25 Jul 2013 at 7:06
| non_main | add tab for lectures add tab for some of the lectures from various sources original issue reported on code google com by canadai gmail com on jul at | 0 |
4,299 | 21,672,517,274 | IssuesEvent | 2022-05-08 07:11:23 | svengreb/tmpl-go | https://api.github.com/repos/svengreb/tmpl-go | opened | Update to `tmpl` template repository version `0.11.0` | type-improvement context-techstack scope-compatibility scope-maintainability | Update to [`tmpl` version `0.11.0`][1] which comes with…
1. [an opt-in Dependabot version update configuration][2] — this will remove the currently used [`.github/dependabot.yml` file][3] in order to cut the PR noise and reduce the maintenance overhead. Dependency updates will instead be made by keeping up-to-date with new `tmpl` repository versions, which takes care of this.
[1]: https://github.com/svengreb/tmpl/releases/tag/v0.11.0
[2]: https://github.com/svengreb/tmpl/issues/94
[3]: https://github.com/svengreb/tmpl-go/blob/39cf0b85/.github/dependabot.yml
| True | Update to `tmpl` template repository version `0.11.0` - Update to [`tmpl` version `0.11.0`][1] which comes with…
1. [an opt-in Dependabot version update configuration][2] — this will remove the currently used [`.github/dependabot.yml` file][3] in order to cut the PR noise and reduce the maintenance overhead. Dependency updates will instead be made by keeping up-to-date with new `tmpl` repository versions, which takes care of this.
[1]: https://github.com/svengreb/tmpl/releases/tag/v0.11.0
[2]: https://github.com/svengreb/tmpl/issues/94
[3]: https://github.com/svengreb/tmpl-go/blob/39cf0b85/.github/dependabot.yml
| main | update to tmpl template repository version update to which comes with… — this will remove the currently used in order to remove the pr noise and reduce the maintenance overhead dependency updates will be made by keeping up to date with new tmpl repository versions instead that takes care of this | 1 |
2,994 | 10,881,120,109 | IssuesEvent | 2019-11-17 15:44:08 | lrozenblyum/chess | https://api.github.com/repos/lrozenblyum/chess | closed | Drop standalone SonarQube support | CI maintainability | It hasn't been used for a long time. Superseded by SonarCloud usage | True | Drop standalone SonarQube support - It hasn't been used for a long time. Superseded by SonarCloud usage | main | drop standalone sonarqube support it hasn t been used for a long time superseded by sonarcloud usage | 1
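Each row in this dump packs the columns from the file header into one pipe-delimited line. Below is a minimal sketch of splitting such a line into named fields; it assumes the field order shown in the header and that no field itself contains a ` | ` separator (a real loader should use the original dataframe instead), and the short `body`/`combined`/`text` values in the example call are placeholders:

```python
# Split one pipe-delimited dump row into named fields.
# Column names are taken from the dump's header; illustrative only --
# bodies containing " | " would need the original dataframe.
COLUMNS = ["row", "id", "type", "created_at", "repo", "repo_url",
           "action", "title", "labels", "body", "index",
           "text_combine", "label", "text", "binary_label"]

def parse_row(line: str) -> dict:
    parts = [p.strip() for p in line.split(" | ")]
    return dict(zip(COLUMNS, parts))

row = parse_row(
    "2,994 | 10,881,120,109 | IssuesEvent | 2019-11-17 15:44:08 | "
    "lrozenblyum/chess | https://api.github.com/repos/lrozenblyum/chess | "
    "closed | Drop standalone SonarQube support | CI maintainability | "
    "body | True | combined | main | text | 1")
print(row["repo"], row["binary_label"])  # -> lrozenblyum/chess 1
```

The `binary_label` field comes back as the string `"1"`; cast it with `int()` if a numeric label is needed.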
232,572 | 17,786,783,014 | IssuesEvent | 2021-08-31 12:04:36 | Musikhjemmeside/Team-Website | https://api.github.com/repos/Musikhjemmeside/Team-Website | closed | Write an article about yourself | documentation | every member is required to write an article about themselves. | 1.0 | Write an article about yourself - every member is required to write an article about themselves. | non_main | write an article about yourself every member is required to write an article about themselves | 0
435,548 | 30,506,819,147 | IssuesEvent | 2023-07-18 17:31:06 | Sue419/DEV007-md-links | https://api.github.com/repos/Sue419/DEV007-md-links | closed | Planning | documentation | - [x] Flow diagram
- [x] Boilerplate
- [x] Dependency configuration
- [x] Axios, path
- [x] chalk
- [x] HTTP, file system | 1.0 | Planning - - [x] Flow diagram
- [x] Boilerplate
- [x] Dependency configuration
- [x] Axios, path
- [x] chalk
- [x] HTTP, file system | non_main | planning flow diagram boilerplate dependency configuration axios path chalk http file system | 0
76,433 | 26,423,483,722 | IssuesEvent | 2023-01-13 23:39:50 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | opened | primefaces.MOVE_SCRIPTS_TO_BOTTOM = true causes Javascript Error when <ui:include/> is present | :lady_beetle: defect :bangbang: needs-triage | ### Describe the bug
Set web.xml with:
```
<context-param>
<param-name>primefaces.MOVE_SCRIPTS_TO_BOTTOM</param-name>
<param-value>true</param-value>
</context-param>
```
Causes a JS error when using **<ui:include />** in pages that have PrimeFaces components like `<p:selectOneMenu/>` or `<p:calendar/>` (`<h:selectOneMenu/>`, for example, is not affected).

### Reproducer
https://github.com/edudoda/primefaces-test-cases
### Expected behavior
Components like `<p:selectOneMenu/> or <p:calendar/> ` should work fine.
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0
### Theme
_No response_
### JSF implementation
All
### JSF version
2.2
### Java version
8
### Browser(s)
_No response_ | 1.0 | primefaces.MOVE_SCRIPTS_TO_BOTTOM = true causes Javascript Error when <ui:include/> is present - ### Describe the bug
Set web.xml with:
```
<context-param>
<param-name>primefaces.MOVE_SCRIPTS_TO_BOTTOM</param-name>
<param-value>true</param-value>
</context-param>
```
Causes a JS error when using **<ui:include />** in pages that have PrimeFaces components like `<p:selectOneMenu/>` or `<p:calendar/>` (`<h:selectOneMenu/>`, for example, is not affected).

### Reproducer
https://github.com/edudoda/primefaces-test-cases
### Expected behavior
Components like `<p:selectOneMenu/> or <p:calendar/> ` should work fine.
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0
### Theme
_No response_
### JSF implementation
All
### JSF version
2.2
### Java version
8
### Browser(s)
_No response_ | non_main | primefaces move scripts to bottom true causes javascript error when is present describe the bug set web xml with primefaces move scripts to bottom true causes js error when using in pages that have primefaces components like or are not affected for example reproducer expected behavior components like or should work fine primefaces edition community primefaces version theme no response jsf implementation all jsf version java version browser s no response | 0 |
17,898 | 12,415,470,470 | IssuesEvent | 2020-05-22 16:21:13 | wellcometrust/reach | https://api.github.com/repos/wellcometrust/reach | closed | 'Zebra' rows on results table | Usability | As a **user** I need to **be able to quickly and easily scan results**
So that I can **see which results are relevant**
Best seen in the new designs here:
https://zpl.io/bApoNJx | True | 'Zebra' rows on results table - As a **user** I need to **be able to quickly and easily scan results**
So that I can **see which results are relevant**
Best seen in the new designs here:
https://zpl.io/bApoNJx | non_main | zebra rows on results table as a user i need to be able to quickly and easily scan results so that i can see which results are relevant best seen in the new designs here | 0 |
52,867 | 27,808,766,158 | IssuesEvent | 2023-03-17 23:31:14 | magit/magit | https://api.github.com/repos/magit/magit | closed | Performance issue moving in the magit-diff buffer | area: performance | Hi:
When using magit and moving the cursor in the `magit-diff` buffer, emacs takes several seconds just to move it (~5-10 s from one point to the next).
I profiled and these were the results:
```
1 100% - ...
1 100% - eq
1 100% - magit-diff-type
1 100% - let
1 100% - if
1 100% - progn
1 100% - cond
1 100% - let
1 100% - cond
1 100% - or
1 100% - magit-rev-eq
1 100% - magit-commit-p
1 100% - magit-rev-verify
1 100% - magit-git-string-p
1 100% - magit-process-git
1 100% - magit-process-file
1 100% process-file
0 0% Automatic GC
```
It seems like magit calls `process-file` in `magit-section-post-command-hook` through `magit-section-update-highlight`.
So, is it really needed to call something "expensive" like `process-file` on every move of the cursor in magit-diff? This is very bad, especially in Tramp and MS-windows.
Is there some hook I could use to stop this?
In case it is useful here I attach my magit config:
```
(use-package magit :defer t
:init
(setq-default magit-git-executable (executable-find "git")
magit-define-global-key-bindings nil
magit-display-buffer-function #'magit-display-buffer-same-window-except-diff-v1
;; This may help for tramp
auto-revert-buffer-list-filter 'magit-auto-revert-repository-buffer-p
magit-git-debug t
magit-process-display-mode-line-error nil ;; magit errors in modeline
;; This may help on MS_Windows
magit-refresh-status-buffer nil
magit-diff-highlight-indentation nil
magit-revision-insert-related-refs nil
magit-diff-highlight-trailing nil
magit-diff-paint-whitespace nil
magit-diff-highlight-hunk-body nil
magit-diff-refine-hunk nil
)
:config
(keymap-unset magit-section-mode-map "C-<tab>" t) ;; magit-section-cycle shadows tab next
(add-hook 'magit-log-mode-hook (lambda nil
(setq-local show-trailing-whitespace nil
tab-width 4)))
(remove-hook 'magit-refs-sections-hook 'magit-insert-tags))
```
In the config you can see I already set multiple options to `nil`, because the magit performance on ms-windows was just terrible (starting windows processes is terribly slow and MS doesn't care... but then you add antivirus and things get much worse).
However... I don't find any information in the documentation for this one.
Thanks in advance,
Ergus | True | Performance issue moving in the magit-diff buffer - Hi:
When using magit and moving the cursor in the `magit-diff` buffer, emacs takes several seconds just to move it (~5-10 s from one point to the next).
I profiled and these were the results:
```
1 100% - ...
1 100% - eq
1 100% - magit-diff-type
1 100% - let
1 100% - if
1 100% - progn
1 100% - cond
1 100% - let
1 100% - cond
1 100% - or
1 100% - magit-rev-eq
1 100% - magit-commit-p
1 100% - magit-rev-verify
1 100% - magit-git-string-p
1 100% - magit-process-git
1 100% - magit-process-file
1 100% process-file
0 0% Automatic GC
```
It seems like magit calls `process-file` in `magit-section-post-command-hook` through `magit-section-update-highlight`.
So, is it really needed to call something "expensive" like `process-file` on every move of the cursor in magit-diff? This is very bad, especially in Tramp and MS-windows.
Is there some hook I could use to stop this?
In case it is useful here I attach my magit config:
```
(use-package magit :defer t
:init
(setq-default magit-git-executable (executable-find "git")
magit-define-global-key-bindings nil
magit-display-buffer-function #'magit-display-buffer-same-window-except-diff-v1
;; This may help for tramp
auto-revert-buffer-list-filter 'magit-auto-revert-repository-buffer-p
magit-git-debug t
magit-process-display-mode-line-error nil ;; magit errors in modeline
;; This may help on MS_Windows
magit-refresh-status-buffer nil
magit-diff-highlight-indentation nil
magit-revision-insert-related-refs nil
magit-diff-highlight-trailing nil
magit-diff-paint-whitespace nil
magit-diff-highlight-hunk-body nil
magit-diff-refine-hunk nil
)
:config
(keymap-unset magit-section-mode-map "C-<tab>" t) ;; magit-section-cycle shadows tab next
(add-hook 'magit-log-mode-hook (lambda nil
(setq-local show-trailing-whitespace nil
tab-width 4)))
(remove-hook 'magit-refs-sections-hook 'magit-insert-tags))
```
In the config you can see I already set multiple options to `nil`, because the magit performance on ms-windows was just terrible (starting windows processes is terribly slow and MS doesn't care... but then you add antivirus and things get much worse).
However... I don't find any information in the documentation for this one.
Thanks in advance,
Ergus | non_main | performance issue moving in the magit diff buffer hi when using magit and moving cursor on the magit diff buffer emacs takes several seconds just to move the cursor s just to move cursor from one point to the next i profiled and these were the results eq magit diff type let if progn cond let cond or magit rev eq magit commit p magit rev verify magit git string p magit process git magit process file process file automatic gc it seems like magit calls process file in magit section post command hook throw magit section update highlight so is it really needed to call something expensive like process file on every move of the cursor in magit diff this is very bad specially in tramp and ms windows is there some hook i could use to stop this in case it is useful here i attach my magit config use package magit defer t init setq default magit git executable executable find git magit define global key bindings nil magit display buffer function magit display buffer same window except diff this may help for tramp auto revert buffer list filter magit auto revert repository buffer p magit git debug t magit process display mode line error nil magit errors in modeline this may help on ms windows magit refresh status buffer nil magit diff highlight indentation nil magit revision insert related refs nil magit diff highlight trailing nil magit diff paint whitespace nil magit diff highlight hunk body nil magit diff refine hunk nil config keymap unset magit section mode map c t magit section cycle shadows tab next add hook magit log mode hook lambda nil setq local show trailing whitespace nil tab width remove hook magit refs sections hook magit insert tags in the config you can see i already set multiple options to nil because the magit performance on ms windows was just terrible starting windows processes is terribly slow and ms doesn t care but then you add antivirus and things get much worst however i don t find any information in the documentation for this one 
thanks in advance ergus | 0 |
859 | 4,531,926,550 | IssuesEvent | 2016-09-08 05:40:32 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Cannot request certificate (letsencrypt) with error message "Error validating challenge: ..." | bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
letsencrypt
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Laptop (Manjaro Linux) executing an Ansible playbook against a CentOS 7 server
##### SUMMARY
<!--- Explain the problem briefly -->
Cannot request certificate (letsencrypt) with error message "Error validating challenge: ..."
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Follow the example of letsencrypt at https://docs.ansible.com/ansible/letsencrypt_module.html#requirements-on-host-that-executes-module
- step 1. generate RSA key and CSR at my laptop (Manjaro Linux) and copy to the CentOS 7 server
- step 2. check nginx config in CentOS server to guarantee that we can retrieve challenge file at path like http://mydomain.com/.well-known/acme-challenge/-EJuzZ8j_qy8g8zB9B4C9Zy2jXEAqwyGoSSB819Rxvc
- step 3. run ansible example script against that server
```
$ ansible-playbook -i inventory -u root bases.yml -v
```
> RESULT: certificate issuing failed with error message "Error validating challenge ......" (see below for full log)
<!--- Paste example playbooks or commands between quotes below -->
```
- name: install certificates for Test Zabbix
letsencrypt:
account_key: /path/to/mydomain.com/mydomain.com.key
csr: /path/to/mydomain.com/mydomain.com.csr
dest: /path/to/mydomain.com/mydomain.com.crt
register: mydomain_com_challenge
# perform the necessary steps to fulfill the challenge for example:
- copy:
dest: /var/www/html/{{ mydomain_com_challenge['challenge_data']['mydomain.com']['http-01']['resource'] }}
content: "{{ mydomain_com_challenge['challenge_data']['mydomain.com']['http-01']['resource_value'] }}"
when: mydomain_com_challenge|changed
- name: install certificates for Zabbix
letsencrypt:
account_key: /path/to/mydomain.com/mydomain.com.key
csr: /path/to/mydomain.com/mydomain.com.csr
dest: /path/to/mydomain.com/mydomain.com.crt
data: "{{ mydomain_com_challenge }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect the certificate issuing to succeed (even though the default URL of letsencrypt is the staging server), according to the example described in the official Ansible documentation.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
TASK [ansible-letsencrypt : install certificates for Test Zabbix] **************
changed: [63.142.XX.YY] => {"authorizations": [{"challenges": [{"status": "pending", "token": "a6D8LriXpYFWBqPCCR9TLYcLhkuX78TTwDrwUWt_cD0", "type": "tls-sni-01", "uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/x2O9olBftqYw-1qfR2wXDyK-tTaDhkAOjq8qtyyXGH4/11844425"}, {"status": "pending", "token": "-EJuzZ8j_qy8g8zB9B4C9Zy2jXEAqwyGoSSB819Rxvc", "type": "http-01", "uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/x2O9olBftqYw-1qfR2wXDyK-tTaDhkAOjq8qtyyXGH4/11844426"}, {"status": "pending", "token": "5n9Z5muezqi911qAPPChA7rnnwxSPqcqphRJSnO7qXo", "type": "dns-01", "uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/x2O9olBftqYw-1qfR2wXDyK-tTaDhkAOjq8qtyyXGH4/11844427"}], "combinations": [[0], [1], [2]], "expires": "2016-09-03T07:39:24.419769206Z", "identifier": {"type": "dns", "value": "mydomain.com"}, "status": "pending", "uri": "https://acme-staging.api.letsencrypt.org/acme/authz/x2O9olBftqYw-1qfR2wXDyK-tTaDhkAOjq8qtyyXGH4"}], "cert_days": -1, "challenge_data": {"mydomain.com": {"dns-01": {"resource": "_acme-challenge", "resource_value": "ECW-zJ-oqqa0NrZwtpNNNFDoOrAX2HkpD5v1FhS0KI8"}, "http-01": {"resource": ".well-known/acme-challenge/-EJuzZ8j_qy8g8zB9B4C9Zy2jXEAqwyGoSSB819Rxvc", "resource_value": "-EJuzZ8j_qy8g8zB9B4C9Zy2jXEAqwyGoSSB819Rxvc.qyzD-ebStixuyLGMoFX7gLRCsylHqV5tZEOuzBX8bec"}}}, "changed": true}
TASK [ansible-letsencrypt : copy] **********************************************
changed: [63.142.XX.YY] => {"changed": true, "checksum": "2f898e335a26d8ba37ff9bd3d51e8bb25e85472b", "dest": "/var/www/html/.well-known/acme-challenge/-EJuzZ8j_qy8g8zB9B4C9Zy2jXEAqwyGoSSB819Rxvc", "gid": 0, "group": "root", "md5sum": "111c6531ebea8561bec0f77389fe921b", "mode": "0644", "owner": "root", "size": 87, "src": "/root/.ansible/tmp/ansible-tmp-1472283569.85-125669089251342/source", "state": "file", "uid": 0}
TASK [ansible-letsencrypt : install certificates for Zabbix] *******************
fatal: [63.142.XX.YY]: FAILED! => {"changed": false, "failed": true, "msg": "Error validating challenge: CODE: ************************ RESULT: {u'status': u'********', u'keyAuthorization': u'********', u'token': u'-EJuzZ8j_qy8g8zB9B4C9Zy********jXEAqwyGoSSB8********9Rxvc', u'type': u'http-****************', u'uri': u'********'}"}
to retry, use: --limit @bases.retry
```
By executing the script again, I received another error (this time, I used high verbosity -vvv)
```
TASK [ansible-letsencrypt : install certificates for Test Zabbix] **************
task path: /home/my_user/MY_USER/path/to/ansible-letsencrypt/tasks/main.yml:50
Using module file /usr/lib/python2.7/site-packages/ansible/modules/extras/web_infrastructure/letsencrypt.py
<63.142.XX.YY> ESTABLISH SSH CONNECTION FOR USER: root
<63.142.XX.YY> SSH: EXEC ssh -q -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/my_user/.ansible/cp/ansible-ssh-%h-%p-%r 63.142.XX.YY '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373 `" && echo ansible-tmp-1472285560.26-149187906418373="` echo $HOME/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373 `" ) && sleep 0'"'"''
<63.142.XX.YY> PUT /tmp/tmpOjTggd TO /root/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373/letsencrypt.py
<63.142.XX.YY> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/my_user/.ansible/cp/ansible-ssh-%h-%p-%r '[63.142.XX.YY]'
<63.142.XX.YY> ESTABLISH SSH CONNECTION FOR USER: root
<63.142.XX.YY> SSH: EXEC ssh -q -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/my_user/.ansible/cp/ansible-ssh-%h-%p-%r 63.142.XX.YY '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373/ /root/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373/letsencrypt.py && sleep 0'"'"''
<63.142.XX.YY> ESTABLISH SSH CONNECTION FOR USER: root
<63.142.XX.YY> SSH: EXEC ssh -q -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/my_user/.ansible/cp/ansible-ssh-%h-%p-%r -tt 63.142.XX.YY '/bin/sh -c '"'"'/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373/letsencrypt.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373/" > /dev/null 2>&1 && sleep 0'"'"''
fatal: [63.142.XX.YY]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"account_email": null,
"account_key": "/path/to/mydomain.com.key",
"acme_directory": "https://acme-staging.api.letsencrypt.org/directory",
"agreement": "https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf",
"challenge": "http-01",
"csr": "/path/to/mydomain.com.csr",
"data": null,
"dest": "/path/to/mydomain.com.crt",
"remaining_days": 10
},
"module_name": "letsencrypt"
},
"msg": "Error new cert: CODE: 400 RESULT: None"
}
to retry, use: --limit @bases.retry
PLAY RECAP *********************************************************************
```
| True | Cannot request certificate (letsencrypt) with error message "Error validating challenge: ..." - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
letsencrypt
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Laptop (Manjaro Linux) executing an Ansible playbook against a CentOS 7 server
##### SUMMARY
<!--- Explain the problem briefly -->
Cannot request certificate (letsencrypt) with error message "Error validating challenge: ..."
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Follow the example of letsencrypt at https://docs.ansible.com/ansible/letsencrypt_module.html#requirements-on-host-that-executes-module
- step 1. generate RSA key and CSR at my laptop (Manjaro Linux) and copy to the CentOS 7 server
- step 2. check nginx config in CentOS server to guarantee that we can retrieve challenge file at path like http://mydomain.com/.well-known/acme-challenge/-EJuzZ8j_qy8g8zB9B4C9Zy2jXEAqwyGoSSB819Rxvc
- step 3. run ansible example script against that server
```
$ ansible-playbook -i inventory -u root bases.yml -v
```
> RESULT: certificate issuing failed with error message "Error validating challenge ......" (see below for full log)
<!--- Paste example playbooks or commands between quotes below -->
```
- name: install certificates for Test Zabbix
letsencrypt:
account_key: /path/to/mydomain.com/mydomain.com.key
csr: /path/to/mydomain.com/mydomain.com.csr
dest: /path/to/mydomain.com/mydomain.com.crt
register: mydomain_com_challenge
# perform the necessary steps to fulfill the challenge for example:
- copy:
dest: /var/www/html/{{ mydomain_com_challenge['challenge_data']['mydomain.com']['http-01']['resource'] }}
content: "{{ mydomain_com_challenge['challenge_data']['mydomain.com']['http-01']['resource_value'] }}"
when: mydomain_com_challenge|changed
- name: install certificates for Zabbix
letsencrypt:
account_key: /path/to/mydomain.com/mydomain.com.key
csr: /path/to/mydomain.com/mydomain.com.csr
dest: /path/to/mydomain.com/mydomain.com.crt
data: "{{ mydomain_com_challenge }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect the certificate issuing to succeed (even though the default URL of letsencrypt is the staging server), according to the example described in the official Ansible documentation.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
TASK [ansible-letsencrypt : install certificates for Test Zabbix] **************
changed: [63.142.XX.YY] => {"authorizations": [{"challenges": [{"status": "pending", "token": "a6D8LriXpYFWBqPCCR9TLYcLhkuX78TTwDrwUWt_cD0", "type": "tls-sni-01", "uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/x2O9olBftqYw-1qfR2wXDyK-tTaDhkAOjq8qtyyXGH4/11844425"}, {"status": "pending", "token": "-EJuzZ8j_qy8g8zB9B4C9Zy2jXEAqwyGoSSB819Rxvc", "type": "http-01", "uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/x2O9olBftqYw-1qfR2wXDyK-tTaDhkAOjq8qtyyXGH4/11844426"}, {"status": "pending", "token": "5n9Z5muezqi911qAPPChA7rnnwxSPqcqphRJSnO7qXo", "type": "dns-01", "uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/x2O9olBftqYw-1qfR2wXDyK-tTaDhkAOjq8qtyyXGH4/11844427"}], "combinations": [[0], [1], [2]], "expires": "2016-09-03T07:39:24.419769206Z", "identifier": {"type": "dns", "value": "mydomain.com"}, "status": "pending", "uri": "https://acme-staging.api.letsencrypt.org/acme/authz/x2O9olBftqYw-1qfR2wXDyK-tTaDhkAOjq8qtyyXGH4"}], "cert_days": -1, "challenge_data": {"mydomain.com": {"dns-01": {"resource": "_acme-challenge", "resource_value": "ECW-zJ-oqqa0NrZwtpNNNFDoOrAX2HkpD5v1FhS0KI8"}, "http-01": {"resource": ".well-known/acme-challenge/-EJuzZ8j_qy8g8zB9B4C9Zy2jXEAqwyGoSSB819Rxvc", "resource_value": "-EJuzZ8j_qy8g8zB9B4C9Zy2jXEAqwyGoSSB819Rxvc.qyzD-ebStixuyLGMoFX7gLRCsylHqV5tZEOuzBX8bec"}}}, "changed": true}
TASK [ansible-letsencrypt : copy] **********************************************
changed: [63.142.XX.YY] => {"changed": true, "checksum": "2f898e335a26d8ba37ff9bd3d51e8bb25e85472b", "dest": "/var/www/html/.well-known/acme-challenge/-EJuzZ8j_qy8g8zB9B4C9Zy2jXEAqwyGoSSB819Rxvc", "gid": 0, "group": "root", "md5sum": "111c6531ebea8561bec0f77389fe921b", "mode": "0644", "owner": "root", "size": 87, "src": "/root/.ansible/tmp/ansible-tmp-1472283569.85-125669089251342/source", "state": "file", "uid": 0}
TASK [ansible-letsencrypt : install certificates for Zabbix] *******************
fatal: [63.142.XX.YY]: FAILED! => {"changed": false, "failed": true, "msg": "Error validating challenge: CODE: ************************ RESULT: {u'status': u'********', u'keyAuthorization': u'********', u'token': u'-EJuzZ8j_qy8g8zB9B4C9Zy********jXEAqwyGoSSB8********9Rxvc', u'type': u'http-****************', u'uri': u'********'}"}
to retry, use: --limit @bases.retry
```
By executing the script again, I received another error (this time, I used high verbosity -vvv)
```
TASK [ansible-letsencrypt : install certificates for Test Zabbix] **************
task path: /home/my_user/MY_USER/path/to/ansible-letsencrypt/tasks/main.yml:50
Using module file /usr/lib/python2.7/site-packages/ansible/modules/extras/web_infrastructure/letsencrypt.py
<63.142.XX.YY> ESTABLISH SSH CONNECTION FOR USER: root
<63.142.XX.YY> SSH: EXEC ssh -q -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/my_user/.ansible/cp/ansible-ssh-%h-%p-%r 63.142.XX.YY '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373 `" && echo ansible-tmp-1472285560.26-149187906418373="` echo $HOME/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373 `" ) && sleep 0'"'"''
<63.142.XX.YY> PUT /tmp/tmpOjTggd TO /root/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373/letsencrypt.py
<63.142.XX.YY> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/my_user/.ansible/cp/ansible-ssh-%h-%p-%r '[63.142.XX.YY]'
<63.142.XX.YY> ESTABLISH SSH CONNECTION FOR USER: root
<63.142.XX.YY> SSH: EXEC ssh -q -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/my_user/.ansible/cp/ansible-ssh-%h-%p-%r 63.142.XX.YY '/bin/sh -c '"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373/ /root/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373/letsencrypt.py && sleep 0'"'"''
<63.142.XX.YY> ESTABLISH SSH CONNECTION FOR USER: root
<63.142.XX.YY> SSH: EXEC ssh -q -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 -o ControlPath=/home/my_user/.ansible/cp/ansible-ssh-%h-%p-%r -tt 63.142.XX.YY '/bin/sh -c '"'"'/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373/letsencrypt.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1472285560.26-149187906418373/" > /dev/null 2>&1 && sleep 0'"'"''
fatal: [63.142.XX.YY]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"account_email": null,
"account_key": "/path/to/mydomain.com.key",
"acme_directory": "https://acme-staging.api.letsencrypt.org/directory",
"agreement": "https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf",
"challenge": "http-01",
"csr": "/path/to/mydomain.com.csr",
"data": null,
"dest": "/path/to/mydomain.com.crt",
"remaining_days": 10
},
"module_name": "letsencrypt"
},
"msg": "Error new cert: CODE: 400 RESULT: None"
}
to retry, use: --limit @bases.retry
PLAY RECAP *********************************************************************
```
| main | cannot request certificate letsencrypt with error message error validating challenge issue type bug report component name letsencrypt ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific laptop linux manjaro execute ansible playbook against centos server summary cannot request certificate letsencrypt with error message error validating challenge steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used follow the example of letsencrypt at step generate rsa key and csr at my laptop manjaro linux and copy to the centos server step check nginx config in centos server to guarantee that we can retrieve challenge file at path like step run ansible example script against that server ansible playbook i inventory u root bases yml v result certificate issuing failed with error message error validating challenge see below for full log name install certificates for test zabbix letsencrypt account key path to mydomain com mydomain com key csr path to mydomain com mydomain com csr dest path to mydomain com mydomain com crt register mydomain com challenge perform the necessary steps to fulfill the challenge for example copy dest var www html mydomain com challenge content mydomain com challenge when mydomain com challenge changed name install certificates for zabbix letsencrypt account key path to mydomain com mydomain com key csr path to mydomain com mydomain com csr dest path to mydomain com mydomain com crt data mydomain com challenge expected results i expect the certificate issuing succeed even though the default url of letsencrypt is staging server according to example described in 
ansible official document actual results task changed authorizations combinations expires identifier type dns value mydomain com status pending uri cert days challenge data mydomain com dns resource acme challenge resource value ecw zj http resource well known acme challenge resource value qyzd changed true task changed changed true checksum dest var www html well known acme challenge gid group root mode owner root size src root ansible tmp ansible tmp source state file uid task fatal failed changed false failed true msg error validating challenge code result u status u u keyauthorization u u token u u type u http u uri u to retry use limit bases retry by executing the script again i received another error this times i used high verbosity vvv task task path home my user my user path to ansible letsencrypt tasks main yml using module file usr lib site packages ansible modules extras web infrastructure letsencrypt py establish ssh connection for user root ssh exec ssh q c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home my user ansible cp ansible ssh h p r xx yy bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpojtggd to root ansible tmp ansible tmp letsencrypt py ssh exec sftp b c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home my user ansible cp ansible ssh h p r establish ssh connection for user root ssh exec ssh q c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home my user 
ansible cp ansible ssh h p r xx yy bin sh c chmod u x root ansible tmp ansible tmp root ansible tmp ansible tmp letsencrypt py sleep establish ssh connection for user root ssh exec ssh q c o controlmaster auto o controlpersist o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user root o connecttimeout o controlpath home my user ansible cp ansible ssh h p r tt xx yy bin sh c usr bin root ansible tmp ansible tmp letsencrypt py rm rf root ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args account email null account key path to mydomain com key acme directory agreement challenge http csr path to mydomain com csr data null dest path to mydomain com crt remaining days module name letsencrypt msg error new cert code result none to retry use limit bases retry play recap | 1 |
1,819 | 6,577,323,799 | IssuesEvent | 2017-09-12 00:06:41 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Unable to stop VM using azure_rm_virtualmachine with ssh key for RHEL VM's created in azure or using azure_rm_deployment | affects_2.1 azure bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
azure_rm_virtualmachine
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
NO Default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ubuntu 14 /Linux
##### SUMMARY
<!--- Explain the problem briefly -->
Unable to stop VM using azure_rm_virtualmachine with ssh key for RHEL VM's created in azure or using azure_rm_deployment
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Create a RHEL VM with ssh key authentication, and run below playbook to stop created VM but we get error
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: Power Off and On
azure_rm_virtualmachine:
resource_group: RG-APP
name: "{{ vmName }}"
started: False
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
VM to be stopped
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
cd3ans@precise64:~/ansibleplay$ ansible-playbook -vvv stopvm.yml --extra-vars "vmName=POCWEBD001"
Using /etc/ansible/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available
PLAYBOOK: stopvm.yml ***********************************************************
1 plays in stopvm.yml
PLAY [localhost] ***************************************************************
TASK [Power Off and On] ********************************************************
task path: /home/cd3ans/ansibleplay/stopvm.yml:6
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: cd3ans
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281 `" && echo ansible-tmp-1466790398.4-178591174693281="` echo $HOME/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmp3ysraT TO /home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/azure_rm_virtualmachine
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/azure_rm_virtualmachine; rm -rf "/home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"ad_user": null, "admin_password": null, "admin_username": null, "allocated": false, "append_tags": true, "client_id": null, "image": null, "location": null, "name": "POCWEBD001", "network_interface_names": null, "open_ports": null, "os_disk_caching": "ReadOnly", "os_type": "Linux", "password": null, "profile": null, "public_ip_allocation_method": "Static", "remove_on_absent": ["all"], "resource_group": "RG-APP", "restarted": false, "secret": null, "short_hostname": null, "ssh_password_enabled": true, "ssh_public_keys": null, "started": false, "state": "present", "storage_account_name": null, "storage_blob_name": null, "storage_container_name": "vhds", "subnet_name": null, "subscription_id": null, "tags": null, "tenant": null, "virtual_network_name": null, "vm_size": "Standard_D1"}, "module_name": "azure_rm_virtualmachine"}, "msg": "Error creating or updating virtual machinePOCWEBD001 - Changing property 'linuxConfiguration.ssh.publicKeys' is not allowed."}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @stopvm.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
| True | Unable to stop VM using azure_rm_virtualmachine with ssh key for RHEL VM's created in azure or using azure_rm_deployment - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
azure_rm_virtualmachine
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
NO Default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ubuntu 14 /Linux
##### SUMMARY
<!--- Explain the problem briefly -->
Unable to stop VM using azure_rm_virtualmachine with ssh key for RHEL VM's created in azure or using azure_rm_deployment
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Create a RHEL VM with ssh key authentication, and run below playbook to stop created VM but we get error
<!--- Paste example playbooks or commands between quotes below -->
```
---
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: Power Off and On
azure_rm_virtualmachine:
resource_group: RG-APP
name: "{{ vmName }}"
started: False
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
VM to be stopped
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
cd3ans@precise64:~/ansibleplay$ ansible-playbook -vvv stopvm.yml --extra-vars "vmName=POCWEBD001"
Using /etc/ansible/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available
PLAYBOOK: stopvm.yml ***********************************************************
1 plays in stopvm.yml
PLAY [localhost] ***************************************************************
TASK [Power Off and On] ********************************************************
task path: /home/cd3ans/ansibleplay/stopvm.yml:6
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: cd3ans
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281 `" && echo ansible-tmp-1466790398.4-178591174693281="` echo $HOME/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmp3ysraT TO /home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/azure_rm_virtualmachine
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/azure_rm_virtualmachine; rm -rf "/home/cd3ans/.ansible/tmp/ansible-tmp-1466790398.4-178591174693281/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"ad_user": null, "admin_password": null, "admin_username": null, "allocated": false, "append_tags": true, "client_id": null, "image": null, "location": null, "name": "POCWEBD001", "network_interface_names": null, "open_ports": null, "os_disk_caching": "ReadOnly", "os_type": "Linux", "password": null, "profile": null, "public_ip_allocation_method": "Static", "remove_on_absent": ["all"], "resource_group": "RG-APP", "restarted": false, "secret": null, "short_hostname": null, "ssh_password_enabled": true, "ssh_public_keys": null, "started": false, "state": "present", "storage_account_name": null, "storage_blob_name": null, "storage_container_name": "vhds", "subnet_name": null, "subscription_id": null, "tags": null, "tenant": null, "virtual_network_name": null, "vm_size": "Standard_D1"}, "module_name": "azure_rm_virtualmachine"}, "msg": "Error creating or updating virtual machinePOCWEBD001 - Changing property 'linuxConfiguration.ssh.publicKeys' is not allowed."}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @stopvm.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
```
| main | unable to stop vm using azure rm virtualmachine with ssh key for rhel vm s created in azure or using azure rm deployment issue type bug report component name azure rm virtualmachine ansible version ansible configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables no default os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu linux summary unable to stop vm using azure rm virtualmachine with ssh key for rhel vm s created in azure or using azure rm deployment steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used create a rhel vm with ssh key authentication and run below playbook to stop created vm but we get error hosts localhost connection local gather facts no tasks name power off and on azure rm virtualmachine resource group rg app name vmname started false expected results vm to be stopped actual results ansibleplay ansible playbook vvv stopvm yml extra vars vmname using etc ansible ansible cfg as config file provided hosts list is empty only localhost is available playbook stopvm yml plays in stopvm yml play task task path home ansibleplay stopvm yml establish local connection for user exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ansible tmp ansible tmp azure rm virtualmachine exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible tmp ansible tmp azure rm virtualmachine rm rf home ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args ad user null admin password null admin username null allocated false append tags true client id null image null location null name network interface names null open ports null os disk caching readonly os type linux 
password null profile null public ip allocation method static remove on absent resource group rg app restarted false secret null short hostname null ssh password enabled true ssh public keys null started false state present storage account name null storage blob name null storage container name vhds subnet name null subscription id null tags null tenant null virtual network name null vm size standard module name azure rm virtualmachine msg error creating or updating virtual changing property linuxconfiguration ssh publickeys is not allowed no more hosts left to retry use limit stopvm retry play recap localhost ok changed unreachable failed | 1 |
307,611 | 23,207,373,657 | IssuesEvent | 2022-08-02 07:03:52 | ShawnLemuelDabi/ITP-Group-18 | https://api.github.com/repos/ShawnLemuelDabi/ITP-Group-18 | closed | [INVIGILATOR PRESENTATION] Have positive/negative sound effects | documentation | - [x] Positive sound effect when mascot is becoming stronger
- [ ] Negative sound effect when mascot is becoming weaker | 1.0 | [INVIGILATOR PRESENTATION] Have positive/negative sound effects - - [x] Positive sound effect when mascot is becoming stronger
- [ ] Negative sound effect when mascot is becoming weaker | non_main | have positive negative sound effects positive sound effect when mascot is becoming stronger negative sound effect when mascot is becoming weaker | 0 |
184,326 | 31,857,562,300 | IssuesEvent | 2023-09-15 08:35:18 | ugent-library/biblio-backoffice | https://api.github.com/repos/ugent-library/biblio-backoffice | closed | [do not close me, deliberately incomplete] Missing bootstrap components | design | - [ ] Create sm button that works with icons
- [ ] Create style for tooltip hovers without links
https://github.com/ugent-library/biblio-backoffice/issues/716 | 1.0 | [do not close me, deliberately incomplete] Missing bootstrap components - - [ ] Create sm button that works with icons
- [ ] Create style for tooltip hovers without links
https://github.com/ugent-library/biblio-backoffice/issues/716 | non_main | missing bootstrap components create sm button that works with icons create style for tooltip hovers without links | 0 |
10,328 | 7,154,935,519 | IssuesEvent | 2018-01-26 10:35:42 | mozilla/addons-frontend | https://api.github.com/repos/mozilla/addons-frontend | closed | Run developer-uploaded images through pngcrush/optipng/jpegoptim | component: performance triaged | We load a lot of images we don't control, and we should make sure they are as optimized as they can. This means running them through lossless compression tools like pngcrush, optipng, and jpegoptim. We can save a lot by doing this.
We did this in Marketplace a while ago. The main caveat was that cachebusting needed to take this into account, because it had to happen on a separate task, so we'd resize the image, save the file, then optimize them, overwriting the previous copy. If somehow a client accessed the resized image before the optimize step, we needed to load the newer version. It can work with the naive way we do cachebusting right now, but it would be better with https://github.com/mozilla/addons-server/issues/2659 instead. | True | Run developer-uploaded images through pngcrush/optipng/jpegoptim - We load a lot of images we don't control, and we should make sure they are as optimized as they can. This means running them through lossless compression tools like pngcrush, optipng, and jpegoptim. We can save a lot by doing this.
We did this in Marketplace a while ago. The main caveat was that cachebusting needed to take this into account, because it had to happen on a separate task, so we'd resize the image, save the file, then optimize them, overwriting the previous copy. If somehow a client accessed the resized image before the optimize step, we needed to load the newer version. It can work with the naive way we do cachebusting right now, but it would be better with https://github.com/mozilla/addons-server/issues/2659 instead. | non_main | run developer uploaded images through pngcrush optipng jpegoptim we load a lot of images we don t control and we should make sure they are as optimized as they can this means running them through lossless compression tools like pngcrush optipng and jpegoptim we can save a lot by doing this we did this in marketplace a while ago the main caveat was that cachebusting needed to take this into account because it had to happen on a separate task so we d resize the image save the file then optimize them overwriting the previous copy if somehow a client accessed the resized image before the optimize step we needed to load the newer version it can work with the naive way we do cachebusting right now but it would be better with instead | 0 |
4,781 | 24,607,369,906 | IssuesEvent | 2022-10-14 17:34:21 | duckduckgo/zeroclickinfo-fathead | https://api.github.com/repos/duckduckgo/zeroclickinfo-fathead | closed | Sqlalchemy parser throws IOError. | Bug Maintainer Input Requested Topic: Python | <!-- Please use the appropriate issue title format:
BUG FIX
{IA Name} Bug: {Short description of bug}
SUGGESTION
{IA Name} Suggestion: {Short description of suggestion}"
OTHER
{IA Name}: {Short description} -->
### Description
<!-- Describe the bug or suggestion in detail -->
Sqlalchemy parser throws IOError. Here's the traceback:
`
Traceback (most recent call last):
File "parse.py", line 409, in <module>
dp.get_pages()
File "parse.py", line 22, in get_pages
file = open(file_loc,'r+')
IOError: [Errno 2] No such file or directory: 'download/events.html'
`
## Steps to recreate
<!-- Describe the steps, or provide a link to an example search -->
Try to fetch the docs using `fetch.sh` and then parse using `parse.sh`.
## People to notify
<!-- Please @mention any relevant people/organizations here:-->
@moollaza
<!-- LANGUAGE LEADERS ONLY: REMOVE THIS LINE
## Get Started
- [ ] 1) Claim this issue by commenting below
- [ ] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-fathead/blob/master/CONTRIBUTING.md)
- [ ] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [ ] 4) Create a Pull Request
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
<!-- DO NOT REMOVE -->
---
<!-- The Instant Answer ID can be found by clicking the `?` icon beside the Instant Answer result on DuckDuckGo.com -->
Instant Answer Page: https://duck.co/ia/view/sqlalchemy
<!-- FILL THIS IN: ^^^^ -->
| True | Sqlalchemy parser throws IOError. - <!-- Please use the appropriate issue title format:
BUG FIX
{IA Name} Bug: {Short description of bug}
SUGGESTION
{IA Name} Suggestion: {Short description of suggestion}"
OTHER
{IA Name}: {Short description} -->
### Description
<!-- Describe the bug or suggestion in detail -->
Sqlalchemy parser throws IOError. Here's the traceback:
`
Traceback (most recent call last):
File "parse.py", line 409, in <module>
dp.get_pages()
File "parse.py", line 22, in get_pages
file = open(file_loc,'r+')
IOError: [Errno 2] No such file or directory: 'download/events.html'
`
## Steps to recreate
<!-- Describe the steps, or provide a link to an example search -->
Try to fetch the docs using `fetch.sh` and then parse using `parse.sh`.
## People to notify
<!-- Please @mention any relevant people/organizations here:-->
@moollaza
<!-- LANGUAGE LEADERS ONLY: REMOVE THIS LINE
## Get Started
- [ ] 1) Claim this issue by commenting below
- [ ] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-fathead/blob/master/CONTRIBUTING.md)
- [ ] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [ ] 4) Create a Pull Request
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
<!-- DO NOT REMOVE -->
---
<!-- The Instant Answer ID can be found by clicking the `?` icon beside the Instant Answer result on DuckDuckGo.com -->
Instant Answer Page: https://duck.co/ia/view/sqlalchemy
<!-- FILL THIS IN: ^^^^ -->
| main | sqlalchemy parser throws ioerror please use the appropriate issue title format bug fix ia name bug short description of bug suggestion ia name suggestion short description of suggestion other ia name short description description sqlalchemy parser throws ioerror here s the traceback traceback most recent call last file parse py line in dp get pages file parse py line in get pages file open file loc r ioerror no such file or directory download events html steps to recreate try to fetch the docs using fetch sh and then parse using parse sh people to notify moollaza language leaders only remove this line get started claim this issue by commenting below review our and fork this repository create a pull request resources join to ask questions join the to discuss project planning and instant answer metrics read the for technical help instant answer page | 1 |