Dataset column summary (types and observed ranges, from the viewer):

- Unnamed: 0 — int64, 1 to 832k
- id — float64, 2.49B to 32.1B
- type — string, 1 class (IssuesEvent)
- created_at — string, length 19
- repo — string, length 7 to 112
- repo_url — string, length 36 to 141
- action — string, 3 classes
- title — string, length 3 to 438
- labels — string, length 4 to 308
- body — string, length 7 to 254k
- index — string, 7 classes
- text_combine — string, length 96 to 254k
- label — string, 2 classes
- text — string, length 96 to 246k
- binary_label — int64, 0 or 1

| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
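In the rows below, binary_label appears to track the index column (main → 1, non_main → 0). A minimal pandas sketch of that mapping, using two hypothetical rows abbreviated from the table:

```python
import pandas as pd

# Two illustrative rows mirroring the schema above (values abbreviated).
df = pd.DataFrame(
    {
        "id": [13911878387, 6916760544],
        "type": ["IssuesEvent", "IssuesEvent"],
        "index": ["main", "non_main"],
    }
)
# binary_label is 1 for rows whose index is "main", else 0.
df["binary_label"] = (df["index"] == "main").astype("int64")
```

This reproduces the int64 dtype the column summary reports for binary_label.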
3,532
| 13,911,878,387
|
IssuesEvent
|
2020-10-20 18:01:42
|
grey-software/Twitter-Focus
|
https://api.github.com/repos/grey-software/Twitter-Focus
|
opened
|
🚀 Feature Request: Add donate buttons to the README.md
|
Domain: User Experience Role: Maintainer Type: Enhancement hacktoberfest-accepted
|
### Problem Overview 👁️🗨️
Users should be able to donate/sponsor to Grey Software via the donate buttons on README.md for Twitter-Focus.
### What would you like? 🧰
Add the three donate buttons (PayPal, GitHub Sponsors and open-collective) to README.md for Twitter-Focus. The button style should be exactly like the one that can be found on the 'Call to Donate' box when the Twitter-Focus extension is used.
The image below shows the GitHub sponsors and the PayPal button. You would also need to add the open-collective donate button along with these two to README.md.

### What alternatives have you considered? 🔍
N/A
### Additional details ℹ️
Here is a linked issue https://github.com/grey-software/Twitter-Focus/issues/9.
Here is a linked PR https://github.com/grey-software/Twitter-Focus/pull/11.
|
True
|
🚀 Feature Request: Add donate buttons to the README.md - ### Problem Overview 👁️🗨️
Users should be able to donate/sponsor to Grey Software via the donate buttons on README.md for Twitter-Focus.
### What would you like? 🧰
Add the three donate buttons (PayPal, GitHub Sponsors and open-collective) to README.md for Twitter-Focus. The button style should be exactly like the one that can be found on the 'Call to Donate' box when the Twitter-Focus extension is used.
The image below shows the GitHub sponsors and the PayPal button. You would also need to add the open-collective donate button along with these two to README.md.

### What alternatives have you considered? 🔍
N/A
### Additional details ℹ️
Here is a linked issue https://github.com/grey-software/Twitter-Focus/issues/9.
Here is a linked PR https://github.com/grey-software/Twitter-Focus/pull/11.
|
main
|
🚀 feature request add donate buttons to the readme md problem overview 👁️🗨️ users should be able to donate sponsor to grey software via the donate buttons on readme md for twitter focus what would you like 🧰 add the three donate buttons paypal github sponsors and open collective to readme md for twitter focus the button style should be exactly like the one that can be found on the call to donate box when the twitter focus extension is used the image below shows the github sponsors and the paypal button you would also need to add the open collective donate button along with these two to readme md what alternatives have you considered 🔍 n a additional details ℹ️ here is a linked issue here is a linked pr
| 1
|
64,664
| 6,916,760,544
|
IssuesEvent
|
2017-11-29 04:38:34
|
brave/browser-laptop
|
https://api.github.com/repos/brave/browser-laptop
|
reopened
|
Claim token button should be hidden if a wallet is recovered with ugp token
|
0.19.x bug feature/ledger initiative/bat-payments QA/test-plan-specified release-notes/exclude
|
<!--
# Test plan
#
-->
### Description
Claim token button should be hidden if a wallet is recovered with ugp token
### Steps to Reproduce
1. Clean install 0.19.96
2. Create wallet and claim ugp tokens
3. Backup wallet and clear browser profile
4. Create a new browser profile
5. Enable wallet and recover the wallet from step 3
6. Claim free token button is still shown, clicking on the button shows promotion not available
**Actual result:**
```
>>> {"statusCode":422,"error":"Unprocessable Entity","message":"promotion already in use"}
Problem claiming promotion Error: HTTP response 422 for PUT /v1/grants/51245323-b45b---ee0584e183de
```
**Expected result:**
If wallet already contains ugp tokens, claim token button should not be shown
**Reproduces how often:**
100%
### Brave Version
**about:brave info:**
Brave | 0.19.96
-- | --
rev | 9d72944
Muon | 4.5.16
libchromiumcontent | 62.0.3202.94
V8 | 6.2.414.42
Node.js | 7.9.0
Update Channel | Release
OS Platform | Microsoft Windows
OS Release | 10.0.15063
OS Architecture | x64
**Reproducible on current live release:**
N/A
### Additional Information
Confirmed by @LaurenWags on macOS
|
1.0
|
Claim token button should be hidden if a wallet is recovered with ugp token - <!--
# Test plan
#
-->
### Description
Claim token button should be hidden if a wallet is recovered with ugp token
### Steps to Reproduce
1. Clean install 0.19.96
2. Create wallet and claim ugp tokens
3. Backup wallet and clear browser profile
4. Create a new browser profile
5. Enable wallet and recover the wallet from step 3
6. Claim free token button is still shown, clicking on the button shows promotion not available
**Actual result:**
```
>>> {"statusCode":422,"error":"Unprocessable Entity","message":"promotion already in use"}
Problem claiming promotion Error: HTTP response 422 for PUT /v1/grants/51245323-b45b---ee0584e183de
```
**Expected result:**
If wallet already contains ugp tokens, claim token button should not be shown
**Reproduces how often:**
100%
### Brave Version
**about:brave info:**
Brave | 0.19.96
-- | --
rev | 9d72944
Muon | 4.5.16
libchromiumcontent | 62.0.3202.94
V8 | 6.2.414.42
Node.js | 7.9.0
Update Channel | Release
OS Platform | Microsoft Windows
OS Release | 10.0.15063
OS Architecture | x64
**Reproducible on current live release:**
N/A
### Additional Information
Confirmed by @LaurenWags on macOS
|
non_main
|
claim token button should be hidden if a wallet is recovered with ugp token test plan description claim token button should be hidden if a wallet is recovered with ugp token steps to reproduce clean install create wallet and claim ugp tokens backup wallet and clear browser profile create a new browser profile enable wallet and recover the wallet from step claim free token button is still shown clicking on the button shows promotion not available actual result statuscode error unprocessable entity message promotion already in use problem claiming promotion error http response for put grants expected result if wallet already contains ugp tokens claim token button should not be shown reproduces how often brave version about brave info brave rev muon libchromiumcontent node js update channel release os platform microsoft windows os release os architecture reproducible on current live release n a additional information confirmed by laurenwags on macos
| 0
|
1,064
| 4,889,233,762
|
IssuesEvent
|
2016-11-18 09:31:26
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
include_role doesn't work with with_items and multi host
|
affects_2.2 bug_report waiting_on_maintainer
|
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Master: Ubuntu 16.04.2
Managed: Rhel 6.6
##### SUMMARY
include_role doesn't work with 'with_items' and multi host vars
##### STEPS TO REPRODUCE
Playbook
```
- hosts: ref
gather_facts: False
tasks:
- debug: var="item"
with_items: "{{ test_var }}"
- include_role:
name: "role_test"
vars:
r_var: "{{ item }}"
with_items: "{{ test_var }}"
```
roles/role_test/tasks/main.yml
```
---
- debug: var="r_var"
```
hosts:
```
[test]
host1
host2
```
host_vars/host1/main.yml
```
---
test_var:
- "host1_val1"
- "host1_val2"
```
host_vars/host2/main.yml
```
---
test_var:
- "host2_val1"
- "host2_val2"
```
##### EXPECTED RESULTS
```
PLAY [test] *********************************************************************
TASK [debug] *******************************************************************
ok: [host1] => (item=host1_val1) => {
"item": "host1_val1"
}
ok: [host1] => (item=host1_val2) => {
"item": "host1_val2"
}
ok: [host2] => (item=host2_val1) => {
"item": "host2_val1"
}
ok: [host2] => (item=host2_val2) => {
"item": "host2_val2"
}
TASK [include_role] ************************************************************
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host1_val1"
}
ok: [host2] => {
"r_var": "host2_val1"
}
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host1_val2"
}
ok: [host2] => {
"r_var": "host2_val2"
}
PLAY RECAP *********************************************************************
host1 : ok=3 changed=0 unreachable=0 failed=0
host2 : ok=3 changed=0 unreachable=0 failed=0
```
##### ACTUAL RESULTS
```
PLAY [test] *********************************************************************
TASK [debug] *******************************************************************
ok: [host1] => (item=host1_val1) => {
"item": "host1_val1"
}
ok: [host1] => (item=host1_val2) => {
"item": "host1_val2"
}
ok: [host2] => (item=host2_val1) => {
"item": "host2_val1"
}
ok: [host2] => (item=host2_val2) => {
"item": "host2_val2"
}
TASK [include_role] ************************************************************
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host2_val1"
}
ok: [host2] => {
"r_var": "host2_val1"
}
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host2_val2"
}
ok: [host2] => {
"r_var": "host2_val2"
}
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host1_val1"
}
ok: [host2] => {
"r_var": "host1_val1"
}
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host1_val2"
}
ok: [host2] => {
"r_var": "host1_val2"
}
PLAY RECAP *********************************************************************
host1 : ok=5 changed=0 unreachable=0 failed=0
host2 : ok=5 changed=0 unreachable=0 failed=0
```
If test_var is an empty list for host2, the play stops with an error:
ERROR! Unexpected Exception: 'results'
|
True
|
include_role doesn't work with with_items and multi host - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
include_role
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
Master: Ubuntu 16.04.2
Managed: Rhel 6.6
##### SUMMARY
include_role doesn't work with 'with_items' and multi host vars
##### STEPS TO REPRODUCE
Playbook
```
- hosts: ref
gather_facts: False
tasks:
- debug: var="item"
with_items: "{{ test_var }}"
- include_role:
name: "role_test"
vars:
r_var: "{{ item }}"
with_items: "{{ test_var }}"
```
roles/role_test/tasks/main.yml
```
---
- debug: var="r_var"
```
hosts:
```
[test]
host1
host2
```
host_vars/host1/main.yml
```
---
test_var:
- "host1_val1"
- "host1_val2"
```
host_vars/host2/main.yml
```
---
test_var:
- "host2_val1"
- "host2_val2"
```
##### EXPECTED RESULTS
```
PLAY [test] *********************************************************************
TASK [debug] *******************************************************************
ok: [host1] => (item=host1_val1) => {
"item": "host1_val1"
}
ok: [host1] => (item=host1_val2) => {
"item": "host1_val2"
}
ok: [host2] => (item=host2_val1) => {
"item": "host2_val1"
}
ok: [host2] => (item=host2_val2) => {
"item": "host2_val2"
}
TASK [include_role] ************************************************************
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host1_val1"
}
ok: [host2] => {
"r_var": "host2_val1"
}
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host1_val2"
}
ok: [host2] => {
"r_var": "host2_val2"
}
PLAY RECAP *********************************************************************
host1 : ok=3 changed=0 unreachable=0 failed=0
host2 : ok=3 changed=0 unreachable=0 failed=0
```
##### ACTUAL RESULTS
```
PLAY [test] *********************************************************************
TASK [debug] *******************************************************************
ok: [host1] => (item=host1_val1) => {
"item": "host1_val1"
}
ok: [host1] => (item=host1_val2) => {
"item": "host1_val2"
}
ok: [host2] => (item=host2_val1) => {
"item": "host2_val1"
}
ok: [host2] => (item=host2_val2) => {
"item": "host2_val2"
}
TASK [include_role] ************************************************************
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host2_val1"
}
ok: [host2] => {
"r_var": "host2_val1"
}
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host2_val2"
}
ok: [host2] => {
"r_var": "host2_val2"
}
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host1_val1"
}
ok: [host2] => {
"r_var": "host1_val1"
}
TASK [role_test : debug] *******************************************************
ok: [host1] => {
"r_var": "host1_val2"
}
ok: [host2] => {
"r_var": "host1_val2"
}
PLAY RECAP *********************************************************************
host1 : ok=5 changed=0 unreachable=0 failed=0
host2 : ok=5 changed=0 unreachable=0 failed=0
```
If test_var is an empty list for host2, the play stops with an error:
ERROR! Unexpected Exception: 'results'
|
main
|
include role doesn t work with with items and multi host issue type bug report component name include role ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration n a os environment master ubuntu managed rhel summary include role doesn t work with with items and multi host vars steps to reproduce playbook hosts ref gather facts false tasks debug var item with items test var include role name role test vars r var item with items test var roles role test tasks main yml debug var r var hosts host vars main yml test var host vars main yml test var expected results play task ok item item ok item item ok item item ok item item task task ok r var ok r var task ok r var ok r var play recap ok changed unreachable failed ok changed unreachable failed actual results play task ok item item ok item item ok item item ok item item task task ok r var ok r var task ok r var ok r var task ok r var ok r var task ok r var ok r var play recap ok changed unreachable failed ok changed unreachable failed if test var is an empty list for play stops in error error unexpected exception results
| 1
|
4,662
| 24,099,062,240
|
IssuesEvent
|
2022-09-19 21:49:00
|
aws/aws-sam-cli
|
https://api.github.com/repos/aws/aws-sam-cli
|
closed
|
Permissions Error: Unstable state when updating repo
|
area/init maintainer/need-followup
|
When creating a Lambda SAM Application, I get a permissions error. `Error: Unstable state when updating repo. Check that you have permissions to create/delete files in C:\Users\user\AppData\Roaming\AWS SAM directory` Attached is the log file
1. OS: Windows 10
2. `sam --version`: SAM CLI, version 1.35.0
3. AWS region: us-east-2
2021-11-19 15:05:02 [ERROR]: log level: info
2021-11-19 15:05:02 [INFO]: Retrieving AWS endpoint data
2021-11-19 15:05:02 [INFO]: OS: Windows_NT x64 10.0.19041
2021-11-19 15:05:02 [INFO]: Visual Studio Code Extension Host Version: 1.62.3, AWS Toolkit Version: 1.33.0
2021-11-19 15:05:31 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:34 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:36 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:39 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:43 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:45 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:48 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:48 [WARN]: AwsContext: no default region in credentials profile, falling back to us-east-1:
2021-11-19 15:06:20 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:06:20 [INFO]: Running command: (not started) [C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd init --name lambda-nodejs12.x --no-interactive --dependency-manager npm --runtime nodejs12.x --app-template hello-world --architecture x86_64]
2021-11-19 15:06:20 [INFO]: SAM CLI not configured, using SAM found at: 'C:\\Program Files\\Amazon\\AWSSAMCLI\\bin\\sam.cmd'
2021-11-19 15:06:20 [INFO]: Running: (not started) [C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd init --name lambda-nodejs12.x --no-interactive --dependency-manager npm --runtime nodejs12.x --app-template hello-world --architecture x86_64]
2021-11-19 15:08:19 [ERROR]: Unexpected exitcode (1), expecting (0)
2021-11-19 15:08:19 [ERROR]: Error creating new SAM Application. Check the logs by running the "View AWS Toolkit Logs" command from the Command Palette.
2021-11-19 15:08:19 [ERROR]: Error: undefined
2021-11-19 15:08:19 [ERROR]: stderr: Cloning from https://github.com/aws/aws-sam-cli-app-templates
,Error: Unstable state when updating repo. Check that you have permissions to create/delete files in C:\Users\user\AppData\Roaming\AWS SAM directory or file an issue at https://github.com/aws/aws-sam-cli/issues
2021-11-19 15:08:19 [ERROR]: stdout:
2021-11-19 15:08:19 [ERROR]: Error creating new SAM Application: Error: Error with child process: Cloning from https://github.com/aws/aws-sam-cli-app-templates
,Error: Unstable state when updating repo. Check that you have permissions to create/delete files in C:\Users\user\AppData\Roaming\AWS SAM directory or file an issue at https://github.com/aws/aws-sam-cli/issues
at CL (c:\Users\user\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.33.0\dist\extension.js:2032:116)
at BS (c:\Users\user\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.33.0\dist\extension.js:2032:495)
at r3 (c:\Users\user\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.33.0\dist\extension.js:2348:555)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at N3 (c:\Users\user\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.33.0\dist\extension.js:2366:1855)
at c:\Users\user\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.33.0\dist\extension.js:2432:985
at o._executeContributedCommand (c:\Users\user\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\services\extensions\node\extensionHostProcess.js:94:111743)
2021-11-19 15:10:04 [INFO]: telemetry: sent batch (size=4)
|
True
|
Permissions Error: Unstable state when updating repo - When creating a Lambda SAM Application, I get a permissions error. `Error: Unstable state when updating repo. Check that you have permissions to create/delete files in C:\Users\user\AppData\Roaming\AWS SAM directory` Attached is the log file
1. OS: Windows 10
2. `sam --version`: SAM CLI, version 1.35.0
3. AWS region: us-east-2
2021-11-19 15:05:02 [ERROR]: log level: info
2021-11-19 15:05:02 [INFO]: Retrieving AWS endpoint data
2021-11-19 15:05:02 [INFO]: OS: Windows_NT x64 10.0.19041
2021-11-19 15:05:02 [INFO]: Visual Studio Code Extension Host Version: 1.62.3, AWS Toolkit Version: 1.33.0
2021-11-19 15:05:31 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:34 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:36 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:39 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:43 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:45 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:48 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:05:48 [WARN]: AwsContext: no default region in credentials profile, falling back to us-east-1:
2021-11-19 15:06:20 [INFO]: SAM CLI location: C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd
2021-11-19 15:06:20 [INFO]: Running command: (not started) [C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd init --name lambda-nodejs12.x --no-interactive --dependency-manager npm --runtime nodejs12.x --app-template hello-world --architecture x86_64]
2021-11-19 15:06:20 [INFO]: SAM CLI not configured, using SAM found at: 'C:\\Program Files\\Amazon\\AWSSAMCLI\\bin\\sam.cmd'
2021-11-19 15:06:20 [INFO]: Running: (not started) [C:\Program Files\Amazon\AWSSAMCLI\bin\sam.cmd init --name lambda-nodejs12.x --no-interactive --dependency-manager npm --runtime nodejs12.x --app-template hello-world --architecture x86_64]
2021-11-19 15:08:19 [ERROR]: Unexpected exitcode (1), expecting (0)
2021-11-19 15:08:19 [ERROR]: Error creating new SAM Application. Check the logs by running the "View AWS Toolkit Logs" command from the Command Palette.
2021-11-19 15:08:19 [ERROR]: Error: undefined
2021-11-19 15:08:19 [ERROR]: stderr: Cloning from https://github.com/aws/aws-sam-cli-app-templates
,Error: Unstable state when updating repo. Check that you have permissions to create/delete files in C:\Users\user\AppData\Roaming\AWS SAM directory or file an issue at https://github.com/aws/aws-sam-cli/issues
2021-11-19 15:08:19 [ERROR]: stdout:
2021-11-19 15:08:19 [ERROR]: Error creating new SAM Application: Error: Error with child process: Cloning from https://github.com/aws/aws-sam-cli-app-templates
,Error: Unstable state when updating repo. Check that you have permissions to create/delete files in C:\Users\user\AppData\Roaming\AWS SAM directory or file an issue at https://github.com/aws/aws-sam-cli/issues
at CL (c:\Users\user\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.33.0\dist\extension.js:2032:116)
at BS (c:\Users\user\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.33.0\dist\extension.js:2032:495)
at r3 (c:\Users\user\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.33.0\dist\extension.js:2348:555)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at N3 (c:\Users\user\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.33.0\dist\extension.js:2366:1855)
at c:\Users\user\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.33.0\dist\extension.js:2432:985
at o._executeContributedCommand (c:\Users\user\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\services\extensions\node\extensionHostProcess.js:94:111743)
2021-11-19 15:10:04 [INFO]: telemetry: sent batch (size=4)
|
main
|
permissions error unstable state when updating repo when creating a lambda sam application i get a permissions error error unstable state when updating repo check that you have permissions to create delete files in c users user appdata roaming aws sam directory attached is the log file os windows sam version sam cli version aws region us east log level info retrieving aws endpoint data os windows nt visual studio code extension host version toolkit version sam cli location c program files amazon awssamcli bin sam cmd sam cli location c program files amazon awssamcli bin sam cmd sam cli location c program files amazon awssamcli bin sam cmd sam cli location c program files amazon awssamcli bin sam cmd sam cli location c program files amazon awssamcli bin sam cmd sam cli location c program files amazon awssamcli bin sam cmd sam cli location c program files amazon awssamcli bin sam cmd awscontext no default region in credentials profile falling back to us east sam cli location c program files amazon awssamcli bin sam cmd running command not started sam cli not configured using sam found at c program files amazon awssamcli bin sam cmd running not started unexpected exitcode expecting error creating new sam application check the logs by running the view aws toolkit logs command from the command palette error undefined stderr cloning from error unstable state when updating repo check that you have permissions to create delete files in c users user appdata roaming aws sam directory or file an issue at stdout error creating new sam application error error with child process cloning from error unstable state when updating repo check that you have permissions to create delete files in c users user appdata roaming aws sam directory or file an issue at at cl c users user vscode extensions amazonwebservices aws toolkit vscode dist extension js at bs c users user vscode extensions amazonwebservices aws toolkit vscode dist extension js at c users user vscode extensions amazonwebservices aws toolkit vscode dist extension js at processticksandrejections internal process task queues js at c users user vscode extensions amazonwebservices aws toolkit vscode dist extension js at c users user vscode extensions amazonwebservices aws toolkit vscode dist extension js at o executecontributedcommand c users user appdata local programs microsoft vs code resources app out vs workbench services extensions node extensionhostprocess js telemetry sent batch size
| 1
|
5,575
| 27,883,444,754
|
IssuesEvent
|
2023-03-21 21:20:16
|
MozillaFoundation/foundation.mozilla.org
|
https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org
|
closed
|
Convert .form-control stylings to Tailwind
|
engineering frontend SCSS/Tailwind maintain
|
Bring Bootstrap's `.form-control` stylings and our custom SCSS for that class to Tailwind.
Once this is done, we can replace all `.form-control` occurrences with `.tw-form-control`. (#10253)
|
True
|
Convert .form-control stylings to Tailwind - Bring Bootstrap's `.form-control` stylings and our custom SCSS for that class to Tailwind.
Once this is done, we can replace all `.form-control` occurrences with `.tw-form-control`. (#10253)
|
main
|
convert form control stylings to tailwind bring bootstrap s form control stylings and our custom scss for that class to tailwind once this is done we can then replace all the form control occurrences to tw form control
| 1
|
4,235
| 20,983,991,657
|
IssuesEvent
|
2022-03-28 23:39:33
|
walbourn/directx-vs-templates
|
https://api.github.com/repos/walbourn/directx-vs-templates
|
closed
|
STL Print HR Codes
|
maintainence
|
Suggestion: using `<iomanip>` you can do some of the HR-code printing in the following manner; specifically, `setfill`, `setw`, and `hex` can do the same work as the `fprintf` calls.
```
#include <iostream>
#include <iomanip>
#include <thread>
inline void PrintHR(std::wstring message, HRESULT hr)
{
std::wcout << L"[" << std::setfill(L'0') << std::setw(8) << std::this_thread::get_id()
<< L"] " << message << L" HRESULT: 0x" << std::setfill(L'0') << std::setw(8) << std::hex << hr << std::endl;
}
```
|
True
|
STL Print HR Codes - Suggestion: using `<iomanip>` you can do some of the HR-code printing in the following manner; specifically, `setfill`, `setw`, and `hex` can do the same work as the `fprintf` calls.
```
#include <iostream>
#include <iomanip>
#include <thread>
inline void PrintHR(std::wstring message, HRESULT hr)
{
std::wcout << L"[" << std::setfill(L'0') << std::setw(8) << std::this_thread::get_id()
<< L"] " << message << L" HRESULT: 0x" << std::setfill(L'0') << std::setw(8) << std::hex << hr << std::endl;
}
```
|
main
|
stl print hr codes suggestion using iomanip you can do some of the hr code printing in the following manner specifically setfill setw and hex can do same work as fprints include include include inline void printhr std wstring message hresult hr std wcout l std setfill l std setw std this thread get id l message l hresult std setfill l std setw std hex hr std endl
| 1
|
162,540
| 25,552,170,682
|
IssuesEvent
|
2022-11-30 01:19:08
|
phetsims/beers-law-lab
|
https://api.github.com/repos/phetsims/beers-law-lab
|
closed
|
Use NumberDisplay in ConcentrationMeterNode and ATDetectorNode
|
dev:phet-io design:general
|
ConcentrationMeterNode and ATDetectorNode currently do not use NumberDisplay. They create their own Text node and background for the text, then manage the layout.
There is a concession that will need to be made if we use NumberDisplay. These meters currently use ShadedRectangle to provide a pseudo-3D inset "look" to the display, and that is not supported by NumberDisplay.
|
1.0
|
Use NumberDisplay in ConcentrationMeterNode and ATDetectorNode - ConcentrationMeterNode and ATDetectorNode currently do not use NumberDisplay. They create their own Text node and background for the text, then manage the layout.
There is a concession that will need to be made if we use NumberDisplay. These meters currently use ShadedRectangle to provide a pseudo-3D inset "look" to the display, and that is not supported by NumberDisplay.
|
non_main
|
use numberdisplay in concentrationmeternode and atdetectornode concentrationmeternode and atdetectornode currently do not use numberdisplay they create their own text node and background for the text then manage the layout there is a concession that will need to be made if we use numberdisplay these meters currently use shadedrectangle to provide a psuedo inset look to the display and that is not supported by numberdisplay
| 0
|
324,681
| 9,907,453,943
|
IssuesEvent
|
2019-06-27 15:50:55
|
mantidproject/mantid
|
https://api.github.com/repos/mantidproject/mantid
|
closed
|
Colourfill plot on reduced SANS2D data gives an error
|
Component: Workbench Misc: Bug Priority: High
|
Matthew Andrew has an example of a reduced SANS2D dataset with ragged bins that does not plot in workbench with the colorfill option.
We need to grab the file and diagnose and fix the error.
|
1.0
|
Colourfill plot on reduced SANS2D data gives an error - Matthew Andrew has an example of a reduced SANS2D dataset with ragged bins that does not plot in workbench with the colorfill option.
We need to grab the file and diagnose and fix the error.
|
non_main
|
colourfill plot on reduced data gives an error matthew andrew has an example of a reduced dataset with ragged bins that does not plot in workbench with the colorfill option we need to grab the file and diagnose and fix the error
| 0
|
61,881
| 8,560,618,959
|
IssuesEvent
|
2018-11-09 02:10:11
|
docker/docker-py
|
https://api.github.com/repos/docker/docker-py
|
closed
|
refresh container.attrs
|
group/documentation level/dockerclient
|
The following code runs into a problem:
```
container = docker.from_env().containers.create(image=...)
container.start()
ip = container.attrs["NetworkSettings"]["Networks"]["bridge"]["IPAddress"]
# ip == "" :(, but I get it.
time.sleep(...)
ip = container.attrs["NetworkSettings"]["Networks"]["bridge"]["IPAddress"]
# ip == "" :(, still ???:-(???
```
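The stale-attrs behaviour above has a documented answer in docker-py: `container.attrs` is a cached snapshot, and `Container.reload()` re-fetches the state from the daemon. A minimal polling sketch (the helper name and retry parameters are hypothetical, and it assumes a docker-py `Container` object with a bridge network):

```python
import time

def wait_for_bridge_ip(container, attempts=10, delay=0.5):
    """Poll until the bridge network reports an IP address.

    container.attrs is a snapshot taken when the object was fetched;
    docker-py only updates it when reload() queries the daemon again.
    """
    for _ in range(attempts):
        container.reload()  # refresh the cached attrs snapshot
        ip = container.attrs["NetworkSettings"]["Networks"]["bridge"]["IPAddress"]
        if ip:
            return ip
        time.sleep(delay)
    return ""
```

Calling this right after `container.start()` avoids the fixed `time.sleep(...)` in the report, which never helps because the snapshot is never refreshed.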
|
1.0
|
refresh container.attrs - The following code runs into a problem:
```
container = docker.from_env().containers.create(image=...)
container.start()
ip = container.attrs["NetworkSettings"]["Networks"]["bridge"]["IPAddress"]
# ip == "" :(, but I get it.
time.sleep(...)
ip = container.attrs["NetworkSettings"]["Networks"]["bridge"]["IPAddress"]
# ip == "" :(, still ???:-(???
```
|
non_main
|
refresh container attrs following code runs into a problem container docker from env containers create image container start ip container attrs ip but i get it time sleep ip container attrs ip still
| 0
|
392,282
| 26,932,943,307
|
IssuesEvent
|
2023-02-07 18:10:51
|
SPLATteam/splat-tutorial
|
https://api.github.com/repos/SPLATteam/splat-tutorial
|
closed
|
User feedback from Farmata
|
documentation
|
We have received the following feedback from our intern, Farmata, who reviewed the documentation. As a new user, her feedback provides some fresh perspectives.
1. Mention that for a single country while running on "ReportGen-Annual" we have to select "Separate subregions" and not select MAINa in "Subregions/countries". Running the model [section](https://splat-tutorial.readthedocs.io/en/latest/getting_started.html#running-the-model).
2. When adding a new technology on "SpecificTech" explain that the attribute of the technologies is directly loaded. Also mention that adding a battery technology is not the same way as adding other technologies (I first try with adding battery technology and it didn't work, Bilal explained to me later that it's a specific case). Adding a technology [section](https://splat-tutorial.readthedocs.io/en/latest/working_with_technologies.html#adding-a-technology).
3. I also didn't find Renaming technology or deleting one intuitive as I didn't manage to do it. Renaming and deleting a technology [sections](https://splat-tutorial.readthedocs.io/en/latest/working_with_technologies.html#renaming-a-technology).
|
1.0
|
User feedback from Farmata - We have received the following feedback from our intern, Farmata, who reviewed the documentation. As a new user, her feedback provides some fresh perspectives.
1. Mention that for a single country while running on "ReportGen-Annual" we have to select "Separate subregions" and not select MAINa in "Subregions/countries". Running the model [section](https://splat-tutorial.readthedocs.io/en/latest/getting_started.html#running-the-model).
2. When adding a new technology on "SpecificTech" explain that the attribute of the technologies is directly loaded. Also mention that adding a battery technology is not the same way as adding other technologies (I first try with adding battery technology and it didn't work, Bilal explained to me later that it's a specific case). Adding a technology [section](https://splat-tutorial.readthedocs.io/en/latest/working_with_technologies.html#adding-a-technology).
3. I also didn't find Renaming technology or deleting one intuitive as I didn't manage to do it. Renaming and deleting a technology [sections](https://splat-tutorial.readthedocs.io/en/latest/working_with_technologies.html#renaming-a-technology).
|
non_main
|
user feedback from farmata we have received the following feedback from our intern farmata who reviewed the documentation as a new user her feedback provides some fresh perspectives mention that for a single country while running on reportgen annual we have to select separate subregions and not select maina in subregions countries running the model when adding a new technology on specifictech explain that the attribute of the technologies is directly loaded also mention that adding a battery technology is not the same way as adding other technologies i first try with adding battery technology and it didn t work bilal explained to me later that it s a specific case adding a technology i also didn t find renaming technology or deleting one intuitive as i didn t manage to do it renaming and deleting a technology
| 0
|
27,916
| 5,120,939,116
|
IssuesEvent
|
2017-01-09 07:23:03
|
primefaces/primeng
|
https://api.github.com/repos/primefaces/primeng
|
closed
|
Templating fails when using any *ng directives
|
defect
|
**I'm submitting a ...** (check one with "x")
```
[ X] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
**Plunkr to demonstrate the bug with PrimeNG 1.1.1:** https://plnkr.co/edit/BQAjYy?p=preview
**Current behavior**
If `*ngIf` is used in a `<template>` inside `<p-dataList>` (and `p-orderList`, and probably other components that iterate over a template), the render of the initial data seems correct. However, adding new items to the list causes the template rendering to fail.
**Expected behavior**
Template should render correctly when `*ngIf` is used in the template.
**Minimal reproduction of the problem with instructions**
Plunker at https://plnkr.co/edit/BQAjYy?p=preview reproduces the issue.
* **Angular version:** Plunkr uses latest from npm.
* **PrimeNG version:** 1.1.1
* **Browser:** all
* **Language:** all
|
1.0
|
Templating fails when using any *ng directives - **I'm submitting a ...** (check one with "x")
```
[ X] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
**Plunkr to demonstrate the bug with PrimeNG 1.1.1:** https://plnkr.co/edit/BQAjYy?p=preview
**Current behavior**
If `*ngIf` is used in a `<template>` inside `<p-dataList>` (and `p-orderList`, and probably other components that iterate over a template), the render of the initial data seems correct. However, adding new items to the list causes the template rendering to fail.
**Expected behavior**
Template should render correctly when `*ngIf` is used in the template.
**Minimal reproduction of the problem with instructions**
Plunker at https://plnkr.co/edit/BQAjYy?p=preview reproduces the issue.
* **Angular version:** Plunkr uses latest from npm.
* **PrimeNG version:** 1.1.1
* **Browser:** all
* **Language:** all
|
non_main
|
templating fails when using any ng directives i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports plunkr to demonstrate the bug with primeng current behavior if ngif is used in a inside and p orderlist and probably other components that iterate over a template the render of the initial data seems correct however adding new items to the list causes the template rendering to fail expected behavior template should render correctly when ngif is used in the template minimal reproduction of the problem with instructions plunker at reproduces the issue angular version plunkr uses latest from npm primeng version browser all language all
| 0
|
3,385
| 13,127,117,970
|
IssuesEvent
|
2020-08-06 09:46:06
|
gama-platform/gama
|
https://api.github.com/repos/gama-platform/gama
|
closed
|
Setup SonarCloud on GAMA Repo
|
🛠 Affects Maintainability 🤔 > Question 🤗 > Enhancement
|
**Is your request related to a problem? Please describe.**
It could be a great idea to use a code coverage service on the GAMA repo. It would help us see which files are forgotten in the repo, clean up dead code, etc.
**Describe the solution you'd like**
I know [SonarQube software](https://www.sonarqube.org/) which works very well for Java projects. And it can easily be used (for free) on the [SonarCloud platform](https://sonarcloud.io/) (for the dashboard) with Travis CI (to fetch code at each commit)! :smile:
**Additional context**
I've already linked the organization and the project [here](https://sonarcloud.io/organizations/gama-platform/projects), I just need the help of @hqnghi88 to [setup Travis](https://docs.travis-ci.com/user/sonarcloud/#inspecting-code-with-the-sonarqube-scanner) properly (I've started by setting the `SONAR_TOKEN` :wink: )
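For reference, the Travis side of the setup described above is typically just the `sonarcloud` addon plus a scanner invocation in the build step. A sketch, assuming a Maven build and the organization key linked in the issue (the build command is an assumption — adjust to the project's actual build tool):

```yaml
# .travis.yml fragment (sketch — build command is an assumption)
addons:
  sonarcloud:
    organization: "gama-platform"   # matches the org linked above
    token:
      secure: "$SONAR_TOKEN"        # the encrypted token already set up
script:
  - mvn clean verify sonar:sonar    # run the build, then the scanner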
|
True
|
Setup SonarCloud on GAMA Repo - **Is your request related to a problem? Please describe.**
It could be a great idea to use a code coverage service on the GAMA repo. It would help us see which files are forgotten in the repo, clean up dead code, etc.
**Describe the solution you'd like**
I know [SonarQube software](https://www.sonarqube.org/) which works very well for Java projects. And it can easily be used (for free) on the [SonarCloud platform](https://sonarcloud.io/) (for the dashboard) with Travis CI (to fetch code at each commit)! :smile:
**Additional context**
I've already linked the organization and the project [here](https://sonarcloud.io/organizations/gama-platform/projects), I just need the help of @hqnghi88 to [setup Travis](https://docs.travis-ci.com/user/sonarcloud/#inspecting-code-with-the-sonarqube-scanner) properly (I've started by setting the `SONAR_TOKEN` :wink: )
|
main
|
setup sonarcloud on gama repo is your request related to a problem please describe it could be a great idea to use a code covering service on the gama repo it will help to see what files are forgotten in the repo clean dead code etc describe the solution you d like i know which works very well for java projects and it can easily be used for free on the for the dashboard with travis ci to fetch code at each commit smile additional context i ve already linked the organization and the project i just need the help of to properly i ve started by setting the sonar token wink
| 1
|
2,171
| 7,611,041,361
|
IssuesEvent
|
2018-05-01 11:57:23
|
beefproject/beef
|
https://api.github.com/repos/beefproject/beef
|
opened
|
Ruby 2.5
|
Maintainability Ruby 2.5
|
Test BeEF on Ruby 2.5.x.
Initial tests show that BeEF runs with Ruby 2.5.1, however database locking may or may not be a problem. This may or may not be an issue related to the SQLite dependencies.
|
True
|
Ruby 2.5 - Test BeEF on Ruby 2.5.x.
Initial tests show that BeEF runs with Ruby 2.5.1, however database locking may or may not be a problem. This may or may not be an issue related to the SQLite dependencies.
|
main
|
ruby test beef on ruby x initial tests show that beef runs with ruby however database locking may or may not be a problem this may or may not be an issue related to the sqlite dependencies
| 1
|
5,406
| 27,127,512,718
|
IssuesEvent
|
2023-02-16 07:03:13
|
OpenRefine/OpenRefine
|
https://api.github.com/repos/OpenRefine/OpenRefine
|
closed
|
Failing Cypress test in the CI
|
bug maintainability CI/CD
|
Our CI is red since 37a443d735811be58892d9dcbd5c36c835cddc6f, although the cause is likely external. We should fix this.
|
True
|
Failing Cypress test in the CI - Our CI is red since 37a443d735811be58892d9dcbd5c36c835cddc6f, although the cause is likely external. We should fix this.
|
main
|
failing cypress test in the ci our ci is red since although the cause is likely external we should fix this
| 1
|
486,742
| 14,013,318,071
|
IssuesEvent
|
2020-10-29 10:15:47
|
netdata/netdata
|
https://api.github.com/repos/netdata/netdata
|
closed
|
Netdata Cloud ECN href tag appears to be formatted incorrectly
|
area/web bug cloud priority/medium
|
<!--
When creating a bug report please:
- Verify first that your issue is not already reported on GitHub.
- Test if the latest release and master branch are affected too.
-->
##### Bug report summary
<!-- Provide a clear and concise description of the bug you're experiencing. -->
When you navigate to a specific node in Netdata Cloud and scroll to ECN, the raw HTML is visible instead of a link.

I inspected the webpage HTML and found that the <a> tag was using &lt; and &gt; instead of < and >.

Upon changing the HTML to use < and > in the <a> tag, I was able to view the link properly.


##### OS / Environment
<!--
Provide as much information about your environment (which operating system and distribution you're using, if Netdata is running in a container, etc.)
as possible to allow us reproduce this bug faster.
To get this information, execute the following commands based on your operating system:
- uname -a; grep -Hv "^#" /etc/*release # Linux
- uname -a; uname -K # BSD
- uname -a; sw_vers # macOS
Place the output from the command in the code section below.
-->
```
Linux redacted 4.18.0-193.19.1.el8_2.x86_64 #1 SMP Mon Sep 14 14:37:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
/etc/centos-release:CentOS Linux release 8.2.2004 (Core)
/etc/os-release:NAME="CentOS Linux"
/etc/os-release:VERSION="8 (Core)"
/etc/os-release:ID="centos"
/etc/os-release:ID_LIKE="rhel fedora"
/etc/os-release:VERSION_ID="8"
/etc/os-release:PLATFORM_ID="platform:el8"
/etc/os-release:PRETTY_NAME="CentOS Linux 8 (Core)"
/etc/os-release:ANSI_COLOR="0;31"
/etc/os-release:CPE_NAME="cpe:/o:centos:centos:8"
/etc/os-release:HOME_URL="https://www.centos.org/"
/etc/os-release:BUG_REPORT_URL="https://bugs.centos.org/"
/etc/os-release:
/etc/os-release:CENTOS_MANTISBT_PROJECT="CentOS-8"
/etc/os-release:CENTOS_MANTISBT_PROJECT_VERSION="8"
/etc/os-release:REDHAT_SUPPORT_PRODUCT="centos"
/etc/os-release:REDHAT_SUPPORT_PRODUCT_VERSION="8"
/etc/os-release:
/etc/redhat-release:CentOS Linux release 8.2.2004 (Core)
/etc/system-release:CentOS Linux release 8.2.2004 (Core)
```
##### Netdata version
<!--
Provide output of `netdata -V`.
If Netdata is running, execute: $(ps aux | grep -E -o "[a-zA-Z/]+netdata ") -V
-->
netdata v1.26.0-43-nightly
##### Component Name
<!--
Let us know which component is affected by the bug. Our code is structured according to its component,
so the component name is the same as the top level directory of the repository.
For example, a bug in the dashboard would be under the web component.
-->
Netdata Cloud
##### Steps To Reproduce
<!--
Describe how you found this bug and how we can reproduce it, preferably with a minimal test-case scenario.
If you'd like to attach larger files, use gist.github.com and paste in links.
-->
1. Open Netdata Cloud in your browser
2. Navigate to one of your nodes
3. Scroll down to ECN
4. You will see raw HTML instead of a link to the Wikipedia article
##### Expected behavior
<!-- Provide a clear and concise description of what you expected to happen. -->
I expected a link to the article to show up instead of raw HTML. I was able to reproduce this on a few other nodes running different netdata versions in Netdata Cloud. This does not affect the netdata web interface at *:19999, only Netdata Cloud.
|
1.0
|
Netdata Cloud ECN href tag appears to be formatted incorrectly - <!--
When creating a bug report please:
- Verify first that your issue is not already reported on GitHub.
- Test if the latest release and master branch are affected too.
-->
##### Bug report summary
<!-- Provide a clear and concise description of the bug you're experiencing. -->
When you navigate to a specific node in Netdata Cloud and scroll to ECN, the raw HTML is visible instead of a link.

I inspected the webpage HTML and found that the <a> tag was using &lt; and &gt; instead of < and >.

Upon changing the HTML to use < and > in the <a> tag, I was able to view the link properly.


##### OS / Environment
<!--
Provide as much information about your environment (which operating system and distribution you're using, if Netdata is running in a container, etc.)
as possible to allow us reproduce this bug faster.
To get this information, execute the following commands based on your operating system:
- uname -a; grep -Hv "^#" /etc/*release # Linux
- uname -a; uname -K # BSD
- uname -a; sw_vers # macOS
Place the output from the command in the code section below.
-->
```
Linux redacted 4.18.0-193.19.1.el8_2.x86_64 #1 SMP Mon Sep 14 14:37:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
/etc/centos-release:CentOS Linux release 8.2.2004 (Core)
/etc/os-release:NAME="CentOS Linux"
/etc/os-release:VERSION="8 (Core)"
/etc/os-release:ID="centos"
/etc/os-release:ID_LIKE="rhel fedora"
/etc/os-release:VERSION_ID="8"
/etc/os-release:PLATFORM_ID="platform:el8"
/etc/os-release:PRETTY_NAME="CentOS Linux 8 (Core)"
/etc/os-release:ANSI_COLOR="0;31"
/etc/os-release:CPE_NAME="cpe:/o:centos:centos:8"
/etc/os-release:HOME_URL="https://www.centos.org/"
/etc/os-release:BUG_REPORT_URL="https://bugs.centos.org/"
/etc/os-release:
/etc/os-release:CENTOS_MANTISBT_PROJECT="CentOS-8"
/etc/os-release:CENTOS_MANTISBT_PROJECT_VERSION="8"
/etc/os-release:REDHAT_SUPPORT_PRODUCT="centos"
/etc/os-release:REDHAT_SUPPORT_PRODUCT_VERSION="8"
/etc/os-release:
/etc/redhat-release:CentOS Linux release 8.2.2004 (Core)
/etc/system-release:CentOS Linux release 8.2.2004 (Core)
```
##### Netdata version
<!--
Provide output of `netdata -V`.
If Netdata is running, execute: $(ps aux | grep -E -o "[a-zA-Z/]+netdata ") -V
-->
netdata v1.26.0-43-nightly
##### Component Name
<!--
Let us know which component is affected by the bug. Our code is structured according to its component,
so the component name is the same as the top level directory of the repository.
For example, a bug in the dashboard would be under the web component.
-->
Netdata Cloud
##### Steps To Reproduce
<!--
Describe how you found this bug and how we can reproduce it, preferably with a minimal test-case scenario.
If you'd like to attach larger files, use gist.github.com and paste in links.
-->
1. Open Netdata Cloud in your browser
2. Navigate to one of your nodes
3. Scroll down to ECN
4. You will see raw HTML instead of a link to the Wikipedia article
##### Expected behavior
<!-- Provide a clear and concise description of what you expected to happen. -->
I expected a link to the article to show up instead of raw HTML. I was able to reproduce this on a few other nodes running different netdata versions in Netdata Cloud. This does not affect the netdata web interface at *:19999, only Netdata Cloud.
|
non_main
|
netdata cloud ecn href tag appears to be formatted incorrectly when creating a bug report please verify first that your issue is not already reported on github test if the latest release and master branch are affected too bug report summary when you navigate to a specific node in netdata cloud and scroll to ecn the raw html is visible instead of a link i inspected the webpage html and found that the tag was using lt and gt instead of upon changing the html with in the tag i was able to view the link properly os environment provide as much information about your environment which operating system and distribution you re using if netdata is running in a container etc as possible to allow us reproduce this bug faster to get this information execute the following commands based on your operating system uname a grep hv etc release linux uname a uname k bsd uname a sw vers macos place the output from the command in the code section below linux redacted smp mon sep utc gnu linux etc centos release centos linux release core etc os release name centos linux etc os release version core etc os release id centos etc os release id like rhel fedora etc os release version id etc os release platform id platform etc os release pretty name centos linux core etc os release ansi color etc os release cpe name cpe o centos centos etc os release home url etc os release bug report url etc os release etc os release centos mantisbt project centos etc os release centos mantisbt project version etc os release redhat support product centos etc os release redhat support product version etc os release etc redhat release centos linux release core etc system release centos linux release core netdata version provide output of netdata v if netdata is running execute ps aux grep e o netdata v netdata nightly component name let us know which component is affected by the bug our code is structured according to its component so the component name is the same as the top level directory of the repository 
for example a bug in the dashboard would be under the web component netdata cloud steps to reproduce describe how you found this bug and how we can reproduce it preferably with a minimal test case scenario if you d like to attach larger files use gist github com and paste in links open netdata cloud in your browser navigate to one of your nodes scroll down to ecn you will see raw html instead of a link to the wikipedia article expected behavior i expected a link to the article to show up instead of raw html i was able to reproduce this on a few other nodes running different netdata versions in netdata cloud this does not affect the netdata web interface at only netdata cloud
| 0
|
34,398
| 9,369,002,688
|
IssuesEvent
|
2019-04-03 09:59:35
|
FRRouting/frr
|
https://api.github.com/repos/FRRouting/frr
|
closed
|
can not compile at netbsd-8
|
build
|
I can't compile frr-7.0 at netbsd-8. Compile error:
```
gmake[1]: Entering directory '/tmp/frr-frr-7.0'
CC zebra/kernel_socket.o
zebra/kernel_socket.c:151:13: error: 'RTM_RESOLVE' undeclared here (not in a function)
{RTM_RESOLVE, "RTM_RESOLVE"},
^
gmake[1]: *** [Makefile:6956: zebra/kernel_socket.o] Error 1
gmake[1]: Leaving directory '/tmp/frr-frr-7.0'
gmake: *** [Makefile:4084: all] Error 2
```
at netbsd-8: less /usr/include/net/route.h shows:
```
…
#define RTM_OLDADD 0x9 /* caused by SIOCADDRT */
#define RTM_OLDDEL 0xa /* caused by SIOCDELRT */
// #define RTM_RESOLVE 0xb /* req to resolve dst to LL addr */
#define RTM_ONEWADDR 0xc /* Old (pre-8.0) RTM_NEWADDR message */
#define RTM_ODELADDR 0xd /* Old (pre-8.0) RTM_DELADDR message */
#define RTM_OOIFINFO 0xe /* Old (pre-1.5) RTM_IFINFO message */
...
```
Is there any workaround available to compile frr at netbsd-8?
Thank you for your efforts
Regards
Uwe
|
1.0
|
can not compile at netbsd-8 - I can't compile frr-7.0 at netbsd-8. Compile error:
```
gmake[1]: Entering directory '/tmp/frr-frr-7.0'
CC zebra/kernel_socket.o
zebra/kernel_socket.c:151:13: error: 'RTM_RESOLVE' undeclared here (not in a function)
{RTM_RESOLVE, "RTM_RESOLVE"},
^
gmake[1]: *** [Makefile:6956: zebra/kernel_socket.o] Error 1
gmake[1]: Leaving directory '/tmp/frr-frr-7.0'
gmake: *** [Makefile:4084: all] Error 2
```
at netbsd-8: less /usr/include/net/route.h shows:
```
…
#define RTM_OLDADD 0x9 /* caused by SIOCADDRT */
#define RTM_OLDDEL 0xa /* caused by SIOCDELRT */
// #define RTM_RESOLVE 0xb /* req to resolve dst to LL addr */
#define RTM_ONEWADDR 0xc /* Old (pre-8.0) RTM_NEWADDR message */
#define RTM_ODELADDR 0xd /* Old (pre-8.0) RTM_DELADDR message */
#define RTM_OOIFINFO 0xe /* Old (pre-1.5) RTM_IFINFO message */
...
```
Is there any workaround available to compile frr at netbsd-8?
Thank you for your efforts
Regards
Uwe
|
non_main
|
can not compile at netbsd i can t compile frr at netbsd compile error gmake entering directory tmp frr frr cc zebra kernel socket o zebra kernel socket c error rtm resolve undeclared here not in a function rtm resolve rtm resolve gmake error gmake leaving directory tmp frr frr gmake error at netbsd less usr include net route h shows … define rtm oldadd caused by siocaddrt define rtm olddel caused by siocdelrt define rtm resolve req to resolve dst to ll addr define rtm onewaddr old pre rtm newaddr message define rtm odeladdr old pre rtm deladdr message define rtm ooifinfo old pre rtm ifinfo message is there any workaround availbile to compile frr at netbsd thank you for your efforts regards uwe
| 0
|
524
| 2,957,024,724
|
IssuesEvent
|
2015-07-08 14:35:11
|
Yoast/wordpress-seo
|
https://api.github.com/repos/Yoast/wordpress-seo
|
closed
|
issue with Visual Composer
|
Compatibility Wait for Feedback
|
Hello, I am having issue with Visual Composer. it came with a theme
here is a screenshot: https://db.tt/aYMG3Ood
|
True
|
issue with Visual Composer - Hello, I am having issue with Visual Composer. it came with a theme
here is a screenshot: https://db.tt/aYMG3Ood
|
non_main
|
issue with visual composer hello i am having issue with visual composer it came with a theme here is a screenshot
| 0
|
926
| 4,629,595,740
|
IssuesEvent
|
2016-09-28 09:46:49
|
caskroom/homebrew-cask
|
https://api.github.com/repos/caskroom/homebrew-cask
|
opened
|
Proposal: get rid of `Hardware::CPU.is_32_bit?` conditionals
|
awaiting maintainer feedback cask
|
Snow Leopard was the last macOS release to support 32-bit. We can’t even guarantee HBC works that far back, and we certainly shouldn’t go out of our way to support such old versions.
As such, the `Hardware::CPU.is_32_bit?` seems useless. I propose we simply get rid of those conditionals altogether.
Casks in main repo (`grep -R 'Hardware::CPU.is_32_bit' "$(brew --repository)/Library/Taps/caskroom/homebrew-cask/Casks" | sed -E 's|.*/(.*)\.rb.*|- [ ] [\1](../tree/master/Casks/\1.rb)|' | pbcopy`):
- [ ] [ableton-live](../tree/master/Casks/ableton-live.rb)
- [ ] [aquamacs](../tree/master/Casks/aquamacs.rb)
- [ ] [gambit-c](../tree/master/Casks/gambit-c.rb)
- [ ] [geppetto](../tree/master/Casks/geppetto.rb)
- [ ] [gnubg](../tree/master/Casks/gnubg.rb)
- [ ] [libreoffice](../tree/master/Casks/libreoffice.rb)
- [ ] [ngrok](../tree/master/Casks/ngrok.rb)
- [ ] [p4](../tree/master/Casks/p4.rb)
- [ ] [pacifist](../tree/master/Casks/pacifist.rb)
- [ ] [plex-home-theater](../tree/master/Casks/plex-home-theater.rb)
- [ ] [praat](../tree/master/Casks/praat.rb)
- [ ] [razorsql](../tree/master/Casks/razorsql.rb)
- [ ] [reaper](../tree/master/Casks/reaper.rb)
- [ ] [scala-ide](../tree/master/Casks/scala-ide.rb)
- [ ] [story-writer](../tree/master/Casks/story-writer.rb)
- [ ] [streamtools](../tree/master/Casks/streamtools.rb)
- [ ] [supersync](../tree/master/Casks/supersync.rb)
- [ ] [tiddlywiki](../tree/master/Casks/tiddlywiki.rb)
- [ ] [vega](../tree/master/Casks/vega.rb)
- [ ] [vuescan](../tree/master/Casks/vuescan.rb)
- [ ] [wkhtmltopdf](../tree/master/Casks/wkhtmltopdf.rb)
Casks in [caskroom/versions](https://github.com/caskroom/homebrew-versions) (`grep -R 'Hardware::CPU.is_32_bit' "$(brew --repository)/Library/Taps/caskroom/homebrew-versions/Casks" | sed -E 's|.*/(.*)\.rb.*|- [ ] [\1](../tree/master/Casks/\1.rb)|' | pbcopy`):
- [ ] [ableton-live-beta](../tree/master/Casks/ableton-live-beta.rb)
- [ ] [ableton-live-standard](../tree/master/Casks/ableton-live-standard.rb)
- [ ] [ableton-live-suite](../tree/master/Casks/ableton-live-suite.rb)
|
True
|
Proposal: get rid of `Hardware::CPU.is_32_bit?` conditionals - Snow Leopard was the last macOS release to support 32-bit. We can’t even guarantee HBC works that far back, and we certainly shouldn’t go out of our way to support such old versions.
As such, the `Hardware::CPU.is_32_bit?` seems useless. I propose we simply get rid of those conditionals altogether.
Casks in main repo (`grep -R 'Hardware::CPU.is_32_bit' "$(brew --repository)/Library/Taps/caskroom/homebrew-cask/Casks" | sed -E 's|.*/(.*)\.rb.*|- [ ] [\1](../tree/master/Casks/\1.rb)|' | pbcopy`):
- [ ] [ableton-live](../tree/master/Casks/ableton-live.rb)
- [ ] [aquamacs](../tree/master/Casks/aquamacs.rb)
- [ ] [gambit-c](../tree/master/Casks/gambit-c.rb)
- [ ] [geppetto](../tree/master/Casks/geppetto.rb)
- [ ] [gnubg](../tree/master/Casks/gnubg.rb)
- [ ] [libreoffice](../tree/master/Casks/libreoffice.rb)
- [ ] [ngrok](../tree/master/Casks/ngrok.rb)
- [ ] [p4](../tree/master/Casks/p4.rb)
- [ ] [pacifist](../tree/master/Casks/pacifist.rb)
- [ ] [plex-home-theater](../tree/master/Casks/plex-home-theater.rb)
- [ ] [praat](../tree/master/Casks/praat.rb)
- [ ] [razorsql](../tree/master/Casks/razorsql.rb)
- [ ] [reaper](../tree/master/Casks/reaper.rb)
- [ ] [scala-ide](../tree/master/Casks/scala-ide.rb)
- [ ] [story-writer](../tree/master/Casks/story-writer.rb)
- [ ] [streamtools](../tree/master/Casks/streamtools.rb)
- [ ] [supersync](../tree/master/Casks/supersync.rb)
- [ ] [tiddlywiki](../tree/master/Casks/tiddlywiki.rb)
- [ ] [vega](../tree/master/Casks/vega.rb)
- [ ] [vuescan](../tree/master/Casks/vuescan.rb)
- [ ] [wkhtmltopdf](../tree/master/Casks/wkhtmltopdf.rb)
Casks in [caskroom/versions](https://github.com/caskroom/homebrew-versions) (`grep -R 'Hardware::CPU.is_32_bit' "$(brew --repository)/Library/Taps/caskroom/homebrew-versions/Casks" | sed -E 's|.*/(.*)\.rb.*|- [ ] [\1](../tree/master/Casks/\1.rb)|' | pbcopy`):
- [ ] [ableton-live-beta](../tree/master/Casks/ableton-live-beta.rb)
- [ ] [ableton-live-standard](../tree/master/Casks/ableton-live-standard.rb)
- [ ] [ableton-live-suite](../tree/master/Casks/ableton-live-suite.rb)
|
main
|
proposal get rid of hardware cpu is bit conditionals snow leopard was the last macos release to support bit we can’t even guarantee hbc works that far back and we certainly shouldn’t go out of our way to support such old versions as such the hardware cpu is bit seems useless i propose we simply get rid of those conditionals altogether casks in main repo grep r hardware cpu is bit brew repository library taps caskroom homebrew cask casks sed e s rb tree master casks rb pbcopy tree master casks ableton live rb tree master casks aquamacs rb tree master casks gambit c rb tree master casks geppetto rb tree master casks gnubg rb tree master casks libreoffice rb tree master casks ngrok rb tree master casks rb tree master casks pacifist rb tree master casks plex home theater rb tree master casks praat rb tree master casks razorsql rb tree master casks reaper rb tree master casks scala ide rb tree master casks story writer rb tree master casks streamtools rb tree master casks supersync rb tree master casks tiddlywiki rb tree master casks vega rb tree master casks vuescan rb tree master casks wkhtmltopdf rb casks in grep r hardware cpu is bit brew repository library taps caskroom homebrew versions casks sed e s rb tree master casks rb pbcopy tree master casks ableton live beta rb tree master casks ableton live standard rb tree master casks ableton live suite rb
| 1
|
997
| 4,760,612,398
|
IssuesEvent
|
2016-10-25 03:57:51
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
copy module should preserve mode of src file when mode is not explicitly set
|
affects_1.9 bug_report feature_idea in progress waiting_on_maintainer
|
##### Issue Type: Bug Report
##### Component Name: copy module
##### Ansible Version: 1.9.0.1
##### Environment: *ix
##### Summary:
When using the copy module to copy multiple files to their destination, it would be really useful if the mode of the source was preserved. This would allow copying several files in a list without explicitly setting modes for each one.
In any case, this is the expected behavior when going by the documentation (which does not specify a default for mode -- leading one to assume the mode would be preserved unless explicitly specified).
##### Steps To Reproduce:
- name: copy test
copy: src={{ item }} dest=/destination owner=root group=root
with_items:
- test1.sh
- test1.conf
with test1.sh locally with 0755 permissions and test1.conf locally with 0644 permissions
##### Expected Results:
test1.sh on target with 0755 permissions and test1.conf on target with 0644 permissions
##### Actual Results:
test1.sh and test1.conf on target with 0644 permissions.
If you were to want to maintain the current behavior for backwards compatibility, the default mode should at least be documented (with possibly a recommendation to look at the synchronize module instead). Alternatively this could be done with something like
- name: copy test
copy: src={{ item }} dest=/destination owner=root group=root mode=preserve
with_items:
- test1.sh
- test1.conf
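The requested behavior — mirror the source file's permission bits onto the destination unless a mode is explicitly given — can be sketched in a few lines of Python with `shutil` (this illustrates the semantics only; it is not Ansible's implementation, and `copy_preserve_mode` is a made-up name):

```python
import os
import shutil
import stat
import tempfile

def copy_preserve_mode(src, dest):
    """Copy a file and mirror its permission bits onto the destination."""
    shutil.copyfile(src, dest)   # copies contents only
    shutil.copymode(src, dest)   # then mirrors the mode bits

# Demo: an executable script keeps its 0755 bits after copying.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "test1.sh")
    dest = os.path.join(d, "copied.sh")
    with open(src, "w") as f:
        f.write("#!/bin/sh\n")
    os.chmod(src, 0o755)
    copy_preserve_mode(src, dest)
    assert stat.S_IMODE(os.stat(dest).st_mode) == 0o755
```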
|
True
|
copy module should preserve mode of src file when mode is not explicitly set - ##### Issue Type: Bug Report
##### Component Name: copy module
##### Ansible Version: 1.9.0.1
##### Environment: *ix
##### Summary:
When using the copy module to copy multiple files to their destination, it would be really useful if the mode of the source was preserved. This would allow copying several files in a list without explicitly setting modes for each one.
In any case, this is the expected behavior when going by the documentation (which does not specify a default for mode -- leading one to assume the mode would be preserved unless explicitly specified).
##### Steps To Reproduce:
- name: copy test
copy: src={{ item }} dest=/destination owner=root group=root
with_items:
- test1.sh
- test1.conf
with test1.sh locally with 0755 permissions and test1.conf locally with 0644 permissions
##### Expected Results:
test1.sh on target with 0755 permissions and test1.conf on target with 0644 permissions
##### Actual Results:
test1.sh and test1.conf on target with 0644 permissions.
If you were to want to maintain the current behavior for backwards compatibility, the default mode should at least be documented (with possibly a recommendation to look at the synchronize module instead). Alternatively this could be done with something like
- name: copy test
copy: src={{ item }} dest=/destination owner=root group=root mode=preserve
with_items:
- test1.sh
- test1.conf
|
main
|
copy module should preserve mode of src file when mode is not explicitly set issue type bug report component name copy module ansible version environment ix summary when using the copy module to copy multiple files to their destination it would be really useful if the mode of the source was preserved this would allow copying several files in a list without explicitly setting modes for each one in any case this is the expected behavior when going by the documentation which does not specify a default for mode leading one to assume the mode would be preserved unless explicitly specified steps to reproduce name copy test copy src item dest destination owner root group root with items sh conf with sh locally with permissions and conf locally with permissions expected results sh on target with permissions and conf on target with permissions actual results sh and conf on target with permissions if you were to want to maintain the current behavior for backwards compatibility the default mode should at least be documented with possibly a recommendation to look at the synchronize module instead alternatively this could be done with something like name copy test copy src item dest destination owner root group root mode preserve with items sh conf
| 1
|
3,623
| 14,655,279,636
|
IssuesEvent
|
2020-12-28 10:38:13
|
libgdx/libgdx
|
https://api.github.com/repos/libgdx/libgdx
|
closed
|
Extract a shared dependency for libgdx utility classes to reduce coupling
|
enhancement maintainer needed
|
I just wanted to add this promise into the tracker, where you would extract some common functionality like the Collections and Reflection helpers geared towards mobile and put them in a separate reusable project by modules like libgdx-ai in order to reduce their complete dependency on libgdx.
http://www.reddit.com/r/gamedev/comments/2u8ytc/gdxai_150_released/co6b3hx
|
True
|
Extract a shared dependency for libgdx utility classes to reduce coupling - I just wanted to add this promise into the tracker, where you would extract some common functionality like the Collections and Reflection helpers geared towards mobile and put them in a separate reusable project by modules like libgdx-ai in order to reduce their complete dependency on libgdx.
http://www.reddit.com/r/gamedev/comments/2u8ytc/gdxai_150_released/co6b3hx
|
main
|
extract a shared dependency for libgdx utility classes to reduce coupling i just wanted to add this promise into the tracker where you would extract some common functionality like the collections and reflection helpers geared towards mobile and put them in a separate reusable project by modules like libgdx ai in order to reduce their complete dependency on libgdx
| 1
|
1,255
| 5,318,059,178
|
IssuesEvent
|
2017-02-14 00:33:41
|
diofant/diofant
|
https://api.github.com/repos/diofant/diofant
|
closed
|
Drop diofant/plotting/experimental_lambdify.py
|
maintainability plotting
|
This should be replaced with standard lambdify, maybe improved.
See also sympy/sympy#11461, sympy/sympy#10925.
|
True
|
Drop diofant/plotting/experimental_lambdify.py - This should be replaced with standard lambdify, maybe improved.
See also sympy/sympy#11461, sympy/sympy#10925.
|
main
|
drop diofant plotting experimental lambdify py this should be replaced with standard lambdify maybe improved see also sympy sympy sympy sympy
| 1
|
251,902
| 8,029,226,384
|
IssuesEvent
|
2018-07-27 15:20:21
|
mozilla/addons-frontend
|
https://api.github.com/repos/mozilla/addons-frontend
|
closed
|
[tracker] Collections feature
|
component: collections component: legacy pages priority: p3 project: 2018 Q1 project: amo project: desktop pages triaged
|
**The collections feature is being tracked in [this project](https://github.com/mozilla/addons-frontend/projects/3?fullscreen=true)**
Docs / specs:
- [Collections features that we are implementing](https://docs.google.com/document/d/1cJ1k8HsAb6taauvJMDLOiYMa1McfUXNgEtax3hoToPc/edit#heading=h.z5wk3ws26iay)
- [Overview of simplified Collections](https://docs.google.com/document/d/1DutmS8yco-hNiKJ1-0MrkAkze7Ue88xpA4b-MWx6MLI/edit#)
See the following links for the separate issues.
**2017 Q3**
- [x] ~~Collection View #3140 (MVP)~~
- [x] ~~Link to edit your collection while signed in~~ https://github.com/mozilla/addons-frontend/issues/3766
**2017 Q4**
- *landed platform work needed for 2018 Q1 features*
**2018 Q1**
- [x] ~~Add an add-on to an existing collection~~ #2768
- [x] ~~Edit collection name, description, URL~~ https://github.com/mozilla/addons-frontend/issues/4002
- [ ] Edit the add-ons in a collection https://github.com/mozilla/addons-frontend/issues/4004
- [ ] Create a new collection https://github.com/mozilla/addons-frontend/issues/4003
- [ ] List my collections and edit / delete / add #3142
- [ ] Add an add-on to a new collection https://github.com/mozilla/addons-frontend/issues/3993
**Future?**
- [ ] Delete a collection https://github.com/mozilla/addons-frontend/issues/4195
- [ ] Allow contributors to maintain a collection https://github.com/mozilla/addons-frontend/issues/4000
- [ ] Show Collection comments https://github.com/mozilla/addons-frontend/issues/3890
- [ ] Sort add-ons in a collection https://github.com/mozilla/addons-frontend/issues/3991
- [ ] Install an add-on from a collection https://github.com/mozilla/addons-frontend/issues/3992
- [ ] On detail page: this add-on is featured in `<Collection>` https://github.com/mozilla/addons-frontend/issues/3994
|
1.0
|
[tracker] Collections feature - **The collections feature is being tracked in [this project](https://github.com/mozilla/addons-frontend/projects/3?fullscreen=true)**
Docs / specs:
- [Collections features that we are implementing](https://docs.google.com/document/d/1cJ1k8HsAb6taauvJMDLOiYMa1McfUXNgEtax3hoToPc/edit#heading=h.z5wk3ws26iay)
- [Overview of simplified Collections](https://docs.google.com/document/d/1DutmS8yco-hNiKJ1-0MrkAkze7Ue88xpA4b-MWx6MLI/edit#)
See the following links for the separate issues.
**2017 Q3**
- [x] ~~Collection View #3140 (MVP)~~
- [x] ~~Link to edit your collection while signed in~~ https://github.com/mozilla/addons-frontend/issues/3766
**2017 Q4**
- *landed platform work needed for 2018 Q1 features*
**2018 Q1**
- [x] ~~Add an add-on to an existing collection~~ #2768
- [x] ~~Edit collection name, description, URL~~ https://github.com/mozilla/addons-frontend/issues/4002
- [ ] Edit the add-ons in a collection https://github.com/mozilla/addons-frontend/issues/4004
- [ ] Create a new collection https://github.com/mozilla/addons-frontend/issues/4003
- [ ] List my collections and edit / delete / add #3142
- [ ] Add an add-on to a new collection https://github.com/mozilla/addons-frontend/issues/3993
**Future?**
- [ ] Delete a collection https://github.com/mozilla/addons-frontend/issues/4195
- [ ] Allow contributors to maintain a collection https://github.com/mozilla/addons-frontend/issues/4000
- [ ] Show Collection comments https://github.com/mozilla/addons-frontend/issues/3890
- [ ] Sort add-ons in a collection https://github.com/mozilla/addons-frontend/issues/3991
- [ ] Install an add-on from a collection https://github.com/mozilla/addons-frontend/issues/3992
- [ ] On detail page: this add-on is featured in `<Collection>` https://github.com/mozilla/addons-frontend/issues/3994
|
non_main
|
collections feature the collections feature is being tracked in docs specs see the following links for the separate issues collection view mvp link to edit your collection while signed in landed platform work needed for features add an add on to an existing collection edit collection name description url edit the add ons in a collection create a new collection list my collections and edit delete add add an add on to a new collection future delete a collection allow contributors to maintain a collection show collection comments sort add ons in a collection install an add on from a collection on detail page this add on is featured in
| 0
|
88,329
| 25,377,193,835
|
IssuesEvent
|
2022-11-21 14:56:10
|
pulumi/pulumi
|
https://api.github.com/repos/pulumi/pulumi
|
closed
|
Community contributions failing to merge
|
kind/bug p1 area/build
|
See: https://github.com/pulumi/pulumi/pull/11307
Bors expects a status check to complete to ensure that the PR doesn't have any reverse lints. However community contributions are failing to reach the lint step.
|
1.0
|
Community contributions failing to merge - See: https://github.com/pulumi/pulumi/pull/11307
Bors expects a status check to complete to ensure that the PR doesn't have any reverse lints. However community contributions are failing to reach the lint step.
|
non_main
|
community contributions failing to merge see bors expects a status check to complete to ensure that the pr doesn t have any reverse lints however community contributions are failing to reach the lint step
| 0
|
55,574
| 3,073,790,290
|
IssuesEvent
|
2015-08-20 00:33:05
|
RobotiumTech/robotium
|
https://api.github.com/repos/RobotiumTech/robotium
|
closed
|
can not access to remote client
|
bug imported invalid Priority-Medium
|
_From [Dev...@gmail.com](https://code.google.com/u/109568600924356269958/) on June 27, 2013 00:48:49_
What steps will reproduce the problem? 1.Run RemoteControl UI.java,a new windows showed
2.Click the button "Connect"
3.See details in the console
What is the expected output?
What do you see instead?
Forwarding port from 'local:2411' to 'device/emulator:2410'
Attempting to initialize Android Tools...
toolHome is null, can't set it.
Setting Android Tools SDK Dir to F:\android\sdk
DroidSocketProtocol.createRemoteClientConnection(): Remote Runner seems to be connected!
DroidSocketProtocol.verifyRemoteClient(): Remote client did not verify in timeout period.
DroidSocketProtocol.createRemoteClientConnection(): Remote client did NOT verify itself as a remote SocketProtocolRunner! What version of the product are you using? On what operating system? 2012.12.12,on Win7 32bit Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=477_
|
1.0
|
can not access to remote client - _From [Dev...@gmail.com](https://code.google.com/u/109568600924356269958/) on June 27, 2013 00:48:49_
What steps will reproduce the problem? 1.Run RemoteControl UI.java,a new windows showed
2.Click the button "Connect"
3.See details in the console
What is the expected output?
What do you see instead?
Forwarding port from 'local:2411' to 'device/emulator:2410'
Attempting to initialize Android Tools...
toolHome is null, can't set it.
Setting Android Tools SDK Dir to F:\android\sdk
DroidSocketProtocol.createRemoteClientConnection(): Remote Runner seems to be connected!
DroidSocketProtocol.verifyRemoteClient(): Remote client did not verify in timeout period.
DroidSocketProtocol.createRemoteClientConnection(): Remote client did NOT verify itself as a remote SocketProtocolRunner! What version of the product are you using? On what operating system? 2012.12.12,on Win7 32bit Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=477_
|
non_main
|
can not access to remote client from on june what steps will reproduce the problem run remotecontrol ui java a new windows showed click the button connect see details in the console what is the expected output what do you see instead forwarding port from local to device emulator attempting to initialize android tools toolhome is null can t set it setting android tools sdk dir to f android sdk droidsocketprotocol createremoteclientconnection remote runner seems to be connected droidsocketprotocol verifyremoteclient remote client did not verify in timeout period droidsocketprotocol createremoteclientconnection remote client did not verify itself as a remote socketprotocolrunner what version of the product are you using on what operating system on please provide any additional information below original issue
| 0
|
5,663
| 29,351,047,308
|
IssuesEvent
|
2023-05-27 00:11:10
|
Homebrew/homebrew-cask
|
https://api.github.com/repos/Homebrew/homebrew-cask
|
closed
|
Error Installing Cask calibre 6.17.0 hdiutil: attach failed - no mountable file systems
|
awaiting maintainer feedback stale
|
### Verification
- [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/Homebrew/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
- [X] I have retried my command with `--force`.
- [X] I ran `brew update-reset && brew update` and retried my command.
- [X] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [X] I have checked the instructions for [reporting bugs](https://github.com/Homebrew/homebrew-cask#reporting-bugs).
- [X] I made doubly sure this is not a [checksum does not match](https://docs.brew.sh/Common-Issues#cask---checksum-does-not-match) error.
### Description of issue
Getting a no mountable file systems error when updating to calibre 6.17.0
### Command that failed
brew install --cask calibre --force --verbose --debug
### Output of command with `--verbose --debug`
```shell
/usr/local/Homebrew/Library/Homebrew/brew.rb (Cask::CaskLoader::FromAPILoader): loading calibre
==> Cask::Installer#install
==> Printing caveats
==> Cask::Installer#fetch
==> Downloading https://download.calibre-ebook.com/6.17.0/calibre-6.17.0.dmg
/usr/bin/env /usr/local/Homebrew/Library/Homebrew/shims/shared/curl --disable --cookie /dev/null --globoff --show-error --user-agent Homebrew/4.0.15-84-g9d5b017\ \(Macintosh\;\ Intel\ Mac\ OS\ X\ 11.7.6\)\ curl/7.87.0 --header Accept-Language:\ en --retry 3 --fail --location --silent --head https://download.calibre-ebook.com/6.17.0/calibre-6.17.0.dmg
/usr/bin/env /usr/local/Homebrew/Library/Homebrew/shims/shared/curl --disable --cookie /dev/null --globoff --show-error --user-agent Homebrew/4.0.15-84-g9d5b017\ \(Macintosh\;\ Intel\ Mac\ OS\ X\ 11.7.6\)\ curl/7.87.0 --header Accept-Language:\ en --retry 3 --fail --location --silent --head --request GET https://download.calibre-ebook.com/6.17.0/calibre-6.17.0.dmg
Already downloaded: /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
==> Checking quarantine support
/usr/bin/env /usr/bin/xattr -h
/usr/bin/env /usr/bin/swift -target x86_64-apple-macosx11 /usr/local/Homebrew/Library/Homebrew/cask/utils/quarantine.swift
==> Quarantine is available.
==> Verifying Gatekeeper status of /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
/usr/bin/env /usr/bin/xattr -p com.apple.quarantine /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
==> /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg is quarantined
==> Verifying checksum for cask 'calibre'
/usr/bin/env tar --list --file /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
/usr/bin/env hdiutil imageinfo -format /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
==> Installing Cask calibre
==> Cask::Installer#stage
==> Extracting primary container
==> Using container class UnpackStrategy::Dmg for /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
/usr/bin/env hdiutil attach -plist -nobrowse -readonly -mountrandom /private/tmp/d20230427-71756-606pzh /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
/usr/bin/env hdiutil convert -format UDTO -o /private/tmp/d20230427-71756-606pzh/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.cdr /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
/usr/bin/env hdiutil attach -plist -nobrowse -readonly -mountrandom /private/tmp/d20230427-71756-606pzh /private/tmp/d20230427-71756-606pzh/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.cdr
hdiutil: attach failed - no mountable file systems
==> Purging files for version 6.17.0 of Cask calibre
Error: Failure while executing; `/usr/bin/env hdiutil attach -plist -nobrowse -readonly -mountrandom /private/tmp/d20230427-71756-606pzh /private/tmp/d20230427-71756-606pzh/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.cdr` exited with 1. Here's the output:
hdiutil: attach failed - no mountable file systems
/usr/local/Homebrew/Library/Homebrew/system_command.rb:305:in `assert_success!'
/usr/local/Homebrew/Library/Homebrew/system_command.rb:59:in `run!'
/usr/local/Homebrew/Library/Homebrew/system_command.rb:34:in `run'
/usr/local/Homebrew/Library/Homebrew/system_command.rb:38:in `run!'
/usr/local/Homebrew/Library/Homebrew/system_command.rb:26:in `system_command!'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy/dmg.rb:211:in `block in mount'
/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.10_1/lib/ruby/2.6.0/tmpdir.rb:93:in `mktmpdir'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy/dmg.rb:181:in `mount'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy/dmg.rb:171:in `extract_to_dir'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy.rb:116:in `extract'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy.rb:131:in `block in extract_nestedly'
/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.10_1/lib/ruby/2.6.0/tmpdir.rb:93:in `mktmpdir'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy.rb:128:in `extract_nestedly'
/usr/local/Homebrew/Library/Homebrew/cask/installer.rb:213:in `extract_primary_container'
/usr/local/Homebrew/Library/Homebrew/cask/installer.rb:79:in `stage'
/usr/local/Homebrew/Library/Homebrew/cask/installer.rb:107:in `install'
/usr/local/Homebrew/Library/Homebrew/cmd/install.rb:237:in `block in install'
/usr/local/Homebrew/Library/Homebrew/cmd/install.rb:228:in `each'
/usr/local/Homebrew/Library/Homebrew/cmd/install.rb:228:in `install'
/usr/local/Homebrew/Library/Homebrew/brew.rb:94:in `<main>'
```
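For the `attach failed - no mountable file systems` error in the trace above, one way to sanity-check a downloaded image before attaching is to inspect what `hdiutil imageinfo -plist` reports. The sketch below only parses a plist, with an abridged, hypothetical sample embedded so it runs anywhere; the key names follow hdiutil's usual plist layout but should be treated as assumptions:

```python
import plistlib

def mountable_partitions(imageinfo_plist: bytes) -> list:
    """Return the partition entries hdiutil flags as potentially mountable."""
    info = plistlib.loads(imageinfo_plist)
    partitions = info.get("partitions", {}).get("partitions", [])
    return [p for p in partitions if p.get("potentially-mountable")]

# Abridged, hypothetical sample of `hdiutil imageinfo -plist image.dmg` output.
SAMPLE = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <key>partitions</key>
  <dict>
    <key>partitions</key>
    <array>
      <dict>
        <key>partition-name</key><string>disk image</string>
        <key>potentially-mountable</key><true/>
      </dict>
    </array>
  </dict>
</dict>
</plist>
"""

print(len(mountable_partitions(SAMPLE)))  # 1
```

On macOS one could feed it real output via `subprocess.run(["hdiutil", "imageinfo", "-plist", path], capture_output=True).stdout`; an empty result would be consistent with the attach failure reported here and usually points at a corrupt or partially downloaded `.dmg`.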
### Output of `brew doctor` and `brew config`
```shell
Your system is ready to brew.
HOMEBREW_VERSION: 4.0.15-84-g9d5b017
ORIGIN: https://github.com/Homebrew/brew
HEAD: 9d5b017bb9ce1f9addfdd2325f28afbeb64f968a
Last commit: 3 hours ago
Core tap origin: https://github.com/Homebrew/homebrew-core
Core tap HEAD: 6524544aab85de5b5872793590bb0b8791794379
Core tap last commit: 23 minutes ago
Core tap branch: master
Core tap JSON: 27 Apr 13:57 UTC
HOMEBREW_PREFIX: /usr/local
HOMEBREW_CASK_OPTS: []
HOMEBREW_MAKE_JOBS: 4
HOMEBREW_NO_ANALYTICS: set
Homebrew Ruby: 2.6.10 => /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.10_1/bin/ruby
CPU: quad-core 64-bit haswell
Clang: 13.0.0 build 1300
Git: 2.40.1 => /usr/local/bin/git
Curl: 7.87.0 => /usr/bin/curl
macOS: 11.7.6-x86_64
CLT: 13.2.0.0.1.1638488800
Xcode: N/A
```
### Output of `brew tap`
```shell
buo/cask-upgrade
homebrew/bundle
homebrew/cask
homebrew/cask-fonts
homebrew/core
homebrew/services
```
|
True
|
Error Installing Cask calibre 6.17.0 hdiutil: attach failed - no mountable file systems - ### Verification
- [X] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/Homebrew/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
- [X] I have retried my command with `--force`.
- [X] I ran `brew update-reset && brew update` and retried my command.
- [X] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [X] I have checked the instructions for [reporting bugs](https://github.com/Homebrew/homebrew-cask#reporting-bugs).
- [X] I made doubly sure this is not a [checksum does not match](https://docs.brew.sh/Common-Issues#cask---checksum-does-not-match) error.
### Description of issue
Getting a no mountable file systems error when updating to calibre 6.17.0
### Command that failed
brew install --cask calibre --force --verbose --debug
### Output of command with `--verbose --debug`
```shell
/usr/local/Homebrew/Library/Homebrew/brew.rb (Cask::CaskLoader::FromAPILoader): loading calibre
==> Cask::Installer#install
==> Printing caveats
==> Cask::Installer#fetch
==> Downloading https://download.calibre-ebook.com/6.17.0/calibre-6.17.0.dmg
/usr/bin/env /usr/local/Homebrew/Library/Homebrew/shims/shared/curl --disable --cookie /dev/null --globoff --show-error --user-agent Homebrew/4.0.15-84-g9d5b017\ \(Macintosh\;\ Intel\ Mac\ OS\ X\ 11.7.6\)\ curl/7.87.0 --header Accept-Language:\ en --retry 3 --fail --location --silent --head https://download.calibre-ebook.com/6.17.0/calibre-6.17.0.dmg
/usr/bin/env /usr/local/Homebrew/Library/Homebrew/shims/shared/curl --disable --cookie /dev/null --globoff --show-error --user-agent Homebrew/4.0.15-84-g9d5b017\ \(Macintosh\;\ Intel\ Mac\ OS\ X\ 11.7.6\)\ curl/7.87.0 --header Accept-Language:\ en --retry 3 --fail --location --silent --head --request GET https://download.calibre-ebook.com/6.17.0/calibre-6.17.0.dmg
Already downloaded: /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
==> Checking quarantine support
/usr/bin/env /usr/bin/xattr -h
/usr/bin/env /usr/bin/swift -target x86_64-apple-macosx11 /usr/local/Homebrew/Library/Homebrew/cask/utils/quarantine.swift
==> Quarantine is available.
==> Verifying Gatekeeper status of /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
/usr/bin/env /usr/bin/xattr -p com.apple.quarantine /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
==> /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg is quarantined
==> Verifying checksum for cask 'calibre'
/usr/bin/env tar --list --file /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
/usr/bin/env hdiutil imageinfo -format /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
==> Installing Cask calibre
==> Cask::Installer#stage
==> Extracting primary container
==> Using container class UnpackStrategy::Dmg for /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
/usr/bin/env hdiutil attach -plist -nobrowse -readonly -mountrandom /private/tmp/d20230427-71756-606pzh /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
/usr/bin/env hdiutil convert -format UDTO -o /private/tmp/d20230427-71756-606pzh/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.cdr /Users/XXXXXXXX/Library/Caches/Homebrew/downloads/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.dmg
/usr/bin/env hdiutil attach -plist -nobrowse -readonly -mountrandom /private/tmp/d20230427-71756-606pzh /private/tmp/d20230427-71756-606pzh/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.cdr
hdiutil: attach failed - no mountable file systems
==> Purging files for version 6.17.0 of Cask calibre
Error: Failure while executing; `/usr/bin/env hdiutil attach -plist -nobrowse -readonly -mountrandom /private/tmp/d20230427-71756-606pzh /private/tmp/d20230427-71756-606pzh/cd6108ef823810415a19d310ea5eb578822cc6661cc9508a94e6a28eb44544de--calibre-6.17.0.cdr` exited with 1. Here's the output:
hdiutil: attach failed - no mountable file systems
/usr/local/Homebrew/Library/Homebrew/system_command.rb:305:in `assert_success!'
/usr/local/Homebrew/Library/Homebrew/system_command.rb:59:in `run!'
/usr/local/Homebrew/Library/Homebrew/system_command.rb:34:in `run'
/usr/local/Homebrew/Library/Homebrew/system_command.rb:38:in `run!'
/usr/local/Homebrew/Library/Homebrew/system_command.rb:26:in `system_command!'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy/dmg.rb:211:in `block in mount'
/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.10_1/lib/ruby/2.6.0/tmpdir.rb:93:in `mktmpdir'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy/dmg.rb:181:in `mount'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy/dmg.rb:171:in `extract_to_dir'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy.rb:116:in `extract'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy.rb:131:in `block in extract_nestedly'
/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.10_1/lib/ruby/2.6.0/tmpdir.rb:93:in `mktmpdir'
/usr/local/Homebrew/Library/Homebrew/unpack_strategy.rb:128:in `extract_nestedly'
/usr/local/Homebrew/Library/Homebrew/cask/installer.rb:213:in `extract_primary_container'
/usr/local/Homebrew/Library/Homebrew/cask/installer.rb:79:in `stage'
/usr/local/Homebrew/Library/Homebrew/cask/installer.rb:107:in `install'
/usr/local/Homebrew/Library/Homebrew/cmd/install.rb:237:in `block in install'
/usr/local/Homebrew/Library/Homebrew/cmd/install.rb:228:in `each'
/usr/local/Homebrew/Library/Homebrew/cmd/install.rb:228:in `install'
/usr/local/Homebrew/Library/Homebrew/brew.rb:94:in `<main>'
```
### Output of `brew doctor` and `brew config`
```shell
Your system is ready to brew.
HOMEBREW_VERSION: 4.0.15-84-g9d5b017
ORIGIN: https://github.com/Homebrew/brew
HEAD: 9d5b017bb9ce1f9addfdd2325f28afbeb64f968a
Last commit: 3 hours ago
Core tap origin: https://github.com/Homebrew/homebrew-core
Core tap HEAD: 6524544aab85de5b5872793590bb0b8791794379
Core tap last commit: 23 minutes ago
Core tap branch: master
Core tap JSON: 27 Apr 13:57 UTC
HOMEBREW_PREFIX: /usr/local
HOMEBREW_CASK_OPTS: []
HOMEBREW_MAKE_JOBS: 4
HOMEBREW_NO_ANALYTICS: set
Homebrew Ruby: 2.6.10 => /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.10_1/bin/ruby
CPU: quad-core 64-bit haswell
Clang: 13.0.0 build 1300
Git: 2.40.1 => /usr/local/bin/git
Curl: 7.87.0 => /usr/bin/curl
macOS: 11.7.6-x86_64
CLT: 13.2.0.0.1.1638488800
Xcode: N/A
```
### Output of `brew tap`
```shell
buo/cask-upgrade
homebrew/bundle
homebrew/cask
homebrew/cask-fonts
homebrew/core
homebrew/services
```
|
main
|
error installing cask calibre hdiutil attach failed no mountable file systems verification i understand that i have retried my command with force i ran brew update reset brew update and retried my command i ran brew doctor fixed as many issues as possible and retried my command i have checked the instructions for i made doubly sure this is not a error description of issue getting a no mountable file systems error when updating to calibre command that failed brew install cask calibre force verbose debug output of command with verbose debug shell usr local homebrew library homebrew brew rb cask caskloader fromapiloader loading calibre cask installer install printing caveats cask installer fetch downloading usr bin env usr local homebrew library homebrew shims shared curl disable cookie dev null globoff show error user agent homebrew macintosh intel mac os x curl header accept language en retry fail location silent head usr bin env usr local homebrew library homebrew shims shared curl disable cookie dev null globoff show error user agent homebrew macintosh intel mac os x curl header accept language en retry fail location silent head request get already downloaded users xxxxxxxx library caches homebrew downloads calibre dmg checking quarantine support usr bin env usr bin xattr h usr bin env usr bin swift target apple usr local homebrew library homebrew cask utils quarantine swift quarantine is available verifying gatekeeper status of users xxxxxxxx library caches homebrew downloads calibre dmg usr bin env usr bin xattr p com apple quarantine users xxxxxxxx library caches homebrew downloads calibre dmg users xxxxxxxx library caches homebrew downloads calibre dmg is quarantined verifying checksum for cask calibre usr bin env tar list file users xxxxxxxx library caches homebrew downloads calibre dmg usr bin env hdiutil imageinfo format users xxxxxxxx library caches homebrew downloads calibre dmg installing cask calibre cask installer stage extracting primary container 
using container class unpackstrategy dmg for users xxxxxxxx library caches homebrew downloads calibre dmg usr bin env hdiutil attach plist nobrowse readonly mountrandom private tmp users xxxxxxxx library caches homebrew downloads calibre dmg usr bin env hdiutil convert format udto o private tmp calibre cdr users xxxxxxxx library caches homebrew downloads calibre dmg usr bin env hdiutil attach plist nobrowse readonly mountrandom private tmp private tmp calibre cdr hdiutil attach failed no mountable file systems purging files for version of cask calibre error failure while executing usr bin env hdiutil attach plist nobrowse readonly mountrandom private tmp private tmp calibre cdr exited with here s the output hdiutil attach failed no mountable file systems usr local homebrew library homebrew system command rb in assert success usr local homebrew library homebrew system command rb in run usr local homebrew library homebrew system command rb in run usr local homebrew library homebrew system command rb in run usr local homebrew library homebrew system command rb in system command usr local homebrew library homebrew unpack strategy dmg rb in block in mount usr local homebrew library homebrew vendor portable ruby lib ruby tmpdir rb in mktmpdir usr local homebrew library homebrew unpack strategy dmg rb in mount usr local homebrew library homebrew unpack strategy dmg rb in extract to dir usr local homebrew library homebrew unpack strategy rb in extract usr local homebrew library homebrew unpack strategy rb in block in extract nestedly usr local homebrew library homebrew vendor portable ruby lib ruby tmpdir rb in mktmpdir usr local homebrew library homebrew unpack strategy rb in extract nestedly usr local homebrew library homebrew cask installer rb in extract primary container usr local homebrew library homebrew cask installer rb in stage usr local homebrew library homebrew cask installer rb in install usr local homebrew library homebrew cmd install rb in block in install 
usr local homebrew library homebrew cmd install rb in each usr local homebrew library homebrew cmd install rb in install usr local homebrew library homebrew brew rb in output of brew doctor and brew config shell your system is ready to brew homebrew version origin head last commit hours ago core tap origin core tap head core tap last commit minutes ago core tap branch master core tap json apr utc homebrew prefix usr local homebrew cask opts homebrew make jobs homebrew no analytics set homebrew ruby usr local homebrew library homebrew vendor portable ruby bin ruby cpu quad core bit haswell clang build git usr local bin git curl usr bin curl macos clt xcode n a output of brew tap shell buo cask upgrade homebrew bundle homebrew cask homebrew cask fonts homebrew core homebrew services
| 1
|
1,662
| 6,574,059,238
|
IssuesEvent
|
2017-09-11 11:17:52
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
ec2_ami module broken in Ansible 2.2
|
affects_2.2 aws bug_report cloud waiting_on_maintainer
|
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
component: ec2_ami
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
$ ansible --version
/usr/lib64/python2.6/site-packages/cryptography/__init__.py:26: DeprecationWarning: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of cryptography will drop support for Python 2.6
DeprecationWarning
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
n/a
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.7 (Santiago)
$ sudo pip list | grep -i boto
DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6
boto (2.43.0)
boto3 (1.4.2)
botocore (1.4.81)
##### SUMMARY
ec2_ami module crashes and fails to save AMI
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Set up an instance and try to save it using ec2_ami module
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Save the app server as an AMI
ec2_ami:
description: "App server - created {{ ansible_date_time.iso8601 }}"
instance_id: "{{ item.id }}"
name: "Appserver.{{ ansible_date_time.date }}.{{ ansible_date_time.hour }}.{{ ansible_date_time.minute }}.{{ ansible_date_time.second }} {{ ansible_date_time.tz_offset }} {{ scm_branch.stdout }}"
region: "{{ item.region }}"
wait: yes
launch_permissions:
user_ids: ["{{ aws_account_number_prod }}"]
tags:
git_branch: "{{ scm_branch.stdout }}"
with_items: "{{ ec2.instances }}"
register: saved_ami
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
For it to not crash.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
TASK [save_ami : Save the app server as an AMI] ******************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'Image' object has no attribute 'creationDate'
failed: [127.0.0.1] (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-10-10-0-140.eu-central-1.compute.internal', u'public_ip': u'xx.xx.xx.xx', u'private_ip': u'10.10.0.140', u'id': u'i-718cb3cd', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sdb': {u'status': u'attached', u'delete_on_termination': False, u'volume_id': u'vol-xxxxxxxx'}, u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-xxxxxxxx'}, u'/dev/sdc': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-xxxxxxxx'}}, u'key_name': u'xxxxxxxx', u'image_id': u'ami-xxxxxxxx', u'tenancy': u'default', u'groups': {u'sg-xxxxxxx': u'image_build'}, u'public_dns_name': u'', u'state_code': 16, u'tags': {u'Name': u'image_build xxxxxxxx'}, u'placement': u'eu-central-1b', u'ami_launch_index': u'0', u'dns_name': u'', u'region': u'eu-central-1', u'launch_time': u'2016-12-02T01:03:35.000Z', u'instance_type': u't2.small', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'}) => {"failed": true, "item": {"ami_launch_index": "0", "architecture": "x86_64", "block_device_mapping": {"/dev/sda1": {"delete_on_termination": true, "status": "attached", "volume_id": "vol-xxxxxxxx"}, "/dev/sdb": {"delete_on_termination": false, "status": "attached", "volume_id": "vol-xxxxxxxx"}, "/dev/sdc": {"delete_on_termination": true, "status": "attached", "volume_id": "vol-xxxxxxxx"}}, "dns_name": "", "ebs_optimized": false, "groups": {"sg-xxxxxxxx": "image_build"}, "hypervisor": "xen", "id": "i-718cb3cd", "image_id": "ami-xxxxxxxx", "instance_type": "t2.small", "kernel": null, "key_name": "xxxxxxxx", "launch_time": "2016-12-02T01:03:35.000Z", "placement": "eu-central-1b", "private_dns_name": "ip-10-10-0-140.eu-central-1.compute.internal", "private_ip": "10.10.0.140", "public_dns_name": "", "public_ip": "xx.xx.xx.xx", "ramdisk": null, 
"region": "eu-central-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "running", "state_code": 16, "tags": {"Name": "image_build"}, "tenancy": "default", "virtualization_type": "hvm"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_ault7w/ansible_module_ec2_ami.py\", line 560, in <module>\n main()\n File \"/tmp/ansible_ault7w/ansible_module_ec2_ami.py\", line 552, in main\n create_image(module, ec2)\n File \"/tmp/ansible_ault7w/ansible_module_ec2_ami.py\", line 419, in create_image\n module.exit_json(msg=\"AMI creation operation complete\", changed=True, **get_ami_info(img))\n File \"/tmp/ansible_ault7w/ansible_module_ec2_ami.py\", line 332, in get_ami_info\n creationDate=image.creationDate,\nAttributeError: 'Image' object has no attribute 'creationDate'\n", "module_stdout": "", "msg": "MODULE FAILURE"}
```
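The traceback points at `get_ami_info` reading `image.creationDate` on a boto `Image` object that doesn't expose that attribute. As a hedged sketch (the attribute names and fallback here are illustrative assumptions, not the actual module fix), a defensive lookup avoids the hard crash:

```python
def get_ami_info(image):
    """Collect AMI fields without assuming a fixed attribute name.

    boto's Image class has exposed the creation timestamp under
    different names across versions; 'creationDate' raised
    AttributeError in the traceback above.
    """
    creation_date = getattr(image, "creationDate",
                            getattr(image, "creation_date", None))
    return {
        "image_id": getattr(image, "id", None),
        "creation_date": creation_date,
    }
```

The real fix belongs in `ec2_ami.py` (line 332 per the traceback); this only illustrates the failure mode.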
|
True
|
ec2_ami module broken in Ansible 2.2 - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
component: ec2_ami
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
$ ansible --version
/usr/lib64/python2.6/site-packages/cryptography/__init__.py:26: DeprecationWarning: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of cryptography will drop support for Python 2.6
DeprecationWarning
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
n/a
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.7 (Santiago)
$ sudo pip list | grep -i boto
DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6
boto (2.43.0)
boto3 (1.4.2)
botocore (1.4.81)
##### SUMMARY
ec2_ami module crashes and fails to save AMI
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Set up an instance and try to save it using ec2_ami module
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Save the app server as an AMI
ec2_ami:
description: "App server - created {{ ansible_date_time.iso8601 }}"
instance_id: "{{ item.id }}"
name: "Appserver.{{ ansible_date_time.date }}.{{ ansible_date_time.hour }}.{{ ansible_date_time.minute }}.{{ ansible_date_time.second }} {{ ansible_date_time.tz_offset }} {{ scm_branch.stdout }}"
region: "{{ item.region }}"
wait: yes
launch_permissions:
user_ids: ["{{ aws_account_number_prod }}"]
tags:
git_branch: "{{ scm_branch.stdout }}"
with_items: "{{ ec2.instances }}"
register: saved_ami
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
For it to not crash.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
TASK [save_ami : Save the app server as an AMI] ******************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'Image' object has no attribute 'creationDate'
failed: [127.0.0.1] (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-10-10-0-140.eu-central-1.compute.internal', u'public_ip': u'xx.xx.xx.xx', u'private_ip': u'10.10.0.140', u'id': u'i-718cb3cd', u'ebs_optimized': False, u'state': u'running', u'virtualization_type': u'hvm', u'architecture': u'x86_64', u'ramdisk': None, u'block_device_mapping': {u'/dev/sdb': {u'status': u'attached', u'delete_on_termination': False, u'volume_id': u'vol-xxxxxxxx'}, u'/dev/sda1': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-xxxxxxxx'}, u'/dev/sdc': {u'status': u'attached', u'delete_on_termination': True, u'volume_id': u'vol-xxxxxxxx'}}, u'key_name': u'xxxxxxxx', u'image_id': u'ami-xxxxxxxx', u'tenancy': u'default', u'groups': {u'sg-xxxxxxx': u'image_build'}, u'public_dns_name': u'', u'state_code': 16, u'tags': {u'Name': u'image_build xxxxxxxx'}, u'placement': u'eu-central-1b', u'ami_launch_index': u'0', u'dns_name': u'', u'region': u'eu-central-1', u'launch_time': u'2016-12-02T01:03:35.000Z', u'instance_type': u't2.small', u'root_device_name': u'/dev/sda1', u'hypervisor': u'xen'}) => {"failed": true, "item": {"ami_launch_index": "0", "architecture": "x86_64", "block_device_mapping": {"/dev/sda1": {"delete_on_termination": true, "status": "attached", "volume_id": "vol-xxxxxxxx"}, "/dev/sdb": {"delete_on_termination": false, "status": "attached", "volume_id": "vol-xxxxxxxx"}, "/dev/sdc": {"delete_on_termination": true, "status": "attached", "volume_id": "vol-xxxxxxxx"}}, "dns_name": "", "ebs_optimized": false, "groups": {"sg-xxxxxxxx": "image_build"}, "hypervisor": "xen", "id": "i-718cb3cd", "image_id": "ami-xxxxxxxx", "instance_type": "t2.small", "kernel": null, "key_name": "xxxxxxxx", "launch_time": "2016-12-02T01:03:35.000Z", "placement": "eu-central-1b", "private_dns_name": "ip-10-10-0-140.eu-central-1.compute.internal", "private_ip": "10.10.0.140", "public_dns_name": "", "public_ip": "xx.xx.xx.xx", "ramdisk": null, 
"region": "eu-central-1", "root_device_name": "/dev/sda1", "root_device_type": "ebs", "state": "running", "state_code": 16, "tags": {"Name": "image_build"}, "tenancy": "default", "virtualization_type": "hvm"}, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_ault7w/ansible_module_ec2_ami.py\", line 560, in <module>\n main()\n File \"/tmp/ansible_ault7w/ansible_module_ec2_ami.py\", line 552, in main\n create_image(module, ec2)\n File \"/tmp/ansible_ault7w/ansible_module_ec2_ami.py\", line 419, in create_image\n module.exit_json(msg=\"AMI creation operation complete\", changed=True, **get_ami_info(img))\n File \"/tmp/ansible_ault7w/ansible_module_ec2_ami.py\", line 332, in get_ami_info\n creationDate=image.creationDate,\nAttributeError: 'Image' object has no attribute 'creationDate'\n", "module_stdout": "", "msg": "MODULE FAILURE"}
```
|
main
|
ami module broken in ansible issue type bug report component name component ami ansible version ansible version usr site packages cryptography init py deprecationwarning python is no longer supported by the python core team please upgrade your python a future version of cryptography will drop support for python deprecationwarning ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific cat etc redhat release red hat enterprise linux server release santiago sudo pip list grep i boto deprecation python is no longer supported by the python core team please upgrade your python a future version of pip will drop support for python boto botocore summary ami module crashes and fails to save ami steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used set up an instance and try to save it using ami module name save the app server as an ami ami description app server created ansible date time instance id item id name appserver ansible date time date ansible date time hour ansible date time minute ansible date time second ansible date time tz offset scm branch stdout region item region wait yes launch permissions user ids tags git branch scm branch stdout with items instances register saved ami expected results for it to not crash actual results task an exception occurred during task execution to see the full traceback use vvv the error was attributeerror image object has no attribute creationdate failed item u kernel none u root device type u ebs u private dns name u ip eu central compute internal u public ip u xx xx xx xx u private ip u u id u i u ebs optimized false u state u running u virtualization type 
u hvm u architecture u u ramdisk none u block device mapping u dev sdb u status u attached u delete on termination false u volume id u vol xxxxxxxx u dev u status u attached u delete on termination true u volume id u vol xxxxxxxx u dev sdc u status u attached u delete on termination true u volume id u vol xxxxxxxx u key name u xxxxxxxx u image id u ami xxxxxxxx u tenancy u default u groups u sg xxxxxxx u image build u public dns name u u state code u tags u name u image build xxxxxxxx u placement u eu central u ami launch index u u dns name u u region u eu central u launch time u u instance type u small u root device name u dev u hypervisor u xen failed true item ami launch index architecture block device mapping dev delete on termination true status attached volume id vol xxxxxxxx dev sdb delete on termination false status attached volume id vol xxxxxxxx dev sdc delete on termination true status attached volume id vol xxxxxxxx dns name ebs optimized false groups sg xxxxxxxx image build hypervisor xen id i image id ami xxxxxxxx instance type small kernel null key name xxxxxxxx launch time placement eu central private dns name ip eu central compute internal private ip public dns name public ip xx xx xx xx ramdisk null region eu central root device name dev root device type ebs state running state code tags name image build tenancy default virtualization type hvm module stderr traceback most recent call last n file tmp ansible ansible module ami py line in n main n file tmp ansible ansible module ami py line in main n create image module n file tmp ansible ansible module ami py line in create image n module exit json msg ami creation operation complete changed true get ami info img n file tmp ansible ansible module ami py line in get ami info n creationdate image creationdate nattributeerror image object has no attribute creationdate n module stdout msg module failure
| 1
|
3,552
| 14,099,735,972
|
IssuesEvent
|
2020-11-06 02:12:30
|
alacritty/alacritty
|
https://api.github.com/repos/alacritty/alacritty
|
closed
|
Consider setting up editorconfig
|
C - waiting on maintainer enhancement
|
While rustfmt prevents us from spending our time on catching things like wrong or mixed indentation, it's unclear what we're expecting for our shaders, etc., since rustfmt doesn't work on them. I think it would be nice to have an `.editorconfig` that covers them, as well as Rust itself. A lot of editors, like Vim (with a plugin), can automatically pick it up and apply the respective settings, and its syntax is pretty clear.
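A minimal sketch of what such a file could look like (the concrete values and glob patterns are illustrative assumptions, not settings the project agreed on):

```ini
# .editorconfig — picked up automatically by many editors
root = true

[*]
charset = utf-8
insert_final_newline = true

[*.rs]
indent_style = space
indent_size = 4

[*.glsl]
indent_style = space
indent_size = 4
```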
|
True
|
Consider setting up editorconfig - While rustfmt prevents us from spending our time on catching things like wrong or mixed indentation, it's unclear what we're expecting for our shaders, etc., since rustfmt doesn't work on them. I think it would be nice to have an `.editorconfig` that covers them, as well as Rust itself. A lot of editors, like Vim (with a plugin), can automatically pick it up and apply the respective settings, and its syntax is pretty clear.
|
main
|
consider setting up editorconfig while rustfmt prevents us from spending our time on catching things like wrong indention or mixed indention it s unclear what we re expecting for our shaders etc since rustfmt doesn t work on them thus i think it ll be nice to have editorconfig which could cover them as well as rust itself a lot of editors like vim required plugging though can automatically pick them up and apply respectful settings and its syntax is pretty clear
| 1
|
4,891
| 25,102,909,178
|
IssuesEvent
|
2022-11-08 14:48:10
|
bazelbuild/intellij
|
https://api.github.com/repos/bazelbuild/intellij
|
closed
|
can't debug on macos using clion
|
type: bug product: CLion os: macos lang: c++ topic: debugging awaiting-maintainer
|
**details**
I built a simple project ([example](https://github.com/dwtj/clwb-bugs-example)) successfully and I can run it directly, but I can't debug it.
CLion just gets stuck in the debugging state and disables my breakpoints.

I tried adding some flags to the command line, but it's not working:
-c dbg --spawn_strategy=standalone --strategy_regexp=^Linking=local

**my environment:**
xcode : 12.0
clion: v2019.3.6
bazel plugin : v2020.07.13
bazel : release 3.5.0
|
True
|
can't debug on macos using clion - **details**
I built a simple project ([example](https://github.com/dwtj/clwb-bugs-example)) successfully and I can run it directly, but I can't debug it.
CLion just gets stuck in the debugging state and disables my breakpoints.

I tried adding some flags to the command line, but it's not working:
-c dbg --spawn_strategy=standalone --strategy_regexp=^Linking=local

**my environment:**
xcode : 12.0
clion: v2019.3.6
bazel plugin : v2020.07.13
bazel : release 3.5.0
|
main
|
can t debug on macos using clion details i built a simple project successfully and i could run it directly but i can t debug it clion just stuck on debug state and disable my breakpoints i tried to add some flags to command line but it s not working c dbg spawn strategy standalone strategy regexp linking local my environment xcode clion bazel plugin bazel release
| 1
|
528,410
| 15,365,910,387
|
IssuesEvent
|
2021-03-02 00:28:37
|
magento/magento2
|
https://api.github.com/repos/magento/magento2
|
closed
|
Credit Memo - doesn't recalculate tax
|
Component: Sales Component: Tax Issue: Confirmed Issue: ready for confirmation Priority: P1 Progress: done Reproduced on 2.4.x Severity: S1
|
If an order is placed for $65.00 (incl. tax) and we want to refund the customer $5, we set the adjustment fee to $60.00 ($5 refunded).
The totals appear correct, but the tax on the credit memo is the tax from original order, not the tax on the refund.
The tax needs to be recalculated for credit memo, as we must be able to refund tax properly.
related issue:
https://github.com/magento/magento2/issues/7937
(closed - even though this issue still exists)
### Preconditions (*)
1. magento 2.3.2, latest version 2.4
2. ubuntu 19.04
3. php 7.2
### Steps to reproduce (*)
1. place an order with tax

($10 is relevant further down)
2. go into admin and credit memo the order
3. set adjustment fee to new total for the order (original total - refund amount)
4. view credit memo created

How can the tax be $10 on a $5 refund? This needs to be fixed to recalculate tax on the refund.
### Expected result (*)
<!--- Tell us what do you expect to happen. -->
1. Tax is recalculated upon creating credit memo
### Actual result (*)
<!--- Tell us what happened instead. Include error messages and issues. -->
1. Tax is not recalculated upon creating credit memo
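For reference, the expected tax portion of a tax-inclusive refund follows from the refund amount and the tax rate — a toy sketch (the rate is inferred from the $10 tax shown on the $65 order, i.e. a $55 net, and is an assumption about this example, not Magento code):

```python
def refund_tax(refund_incl_tax, tax_rate):
    # Tax portion of a tax-inclusive amount: the amount minus its net value.
    net = refund_incl_tax / (1 + tax_rate)
    return refund_incl_tax - net

# Order: $65.00 incl. tax with $10.00 tax -> net $55.00, rate 10/55.
rate = 10.0 / 55.0
tax_on_refund = round(refund_tax(5.0, rate), 2)  # ~0.77, not the order's 10.00
```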
|
1.0
|
Credit Memo - doesn't recalculate tax - If an order is placed for $65.00 (incl. tax) and we want to refund the customer $5, we set the adjustment fee to $60.00 ($5 refunded).
The totals appear correct, but the tax on the credit memo is the tax from original order, not the tax on the refund.
The tax needs to be recalculated for credit memo, as we must be able to refund tax properly.
related issue:
https://github.com/magento/magento2/issues/7937
(closed - even though this issue still exists)
### Preconditions (*)
1. magento 2.3.2, latest version 2.4
2. ubuntu 19.04
3. php 7.2
### Steps to reproduce (*)
1. place an order with tax

($10 is relevant further down)
2. go into admin and credit memo the order
3. set adjustment fee to new total for the order (original total - refund amount)
4. view credit memo created

How can the tax be $10 on a $5 refund? This needs to be fixed to recalculate tax on the refund.
### Expected result (*)
<!--- Tell us what do you expect to happen. -->
1. Tax is recalculated upon creating credit memo
### Actual result (*)
<!--- Tell us what happened instead. Include error messages and issues. -->
1. Tax is not recalculated upon creating credit memo
|
non_main
|
credit memo doesn t recalculate tax if an order is placed for incl tax and we want to refund the customer we set adjustment fee to refunded the totals appear correct but the tax on the credit memo is the tax from original order not the tax on the refund the tax needs to be recalculated for credit memo as we must be able to refund tax properly related issue closed even though this issue still exists preconditions magento latest version ubuntu php steps to reproduce place an order with tax is relevant further down go into admin and credit memo the order set adjustment fee to new total for the order original total refund amount view credit memo created how can the tax be on a refund this needs to be fixed to recalculate tax on the refund expected result tax is recalculated upon creating credit memo actual result tax is not recalculated upon creating credit memo
| 0
|
5,504
| 27,487,079,296
|
IssuesEvent
|
2023-03-04 06:52:34
|
keptn/community
|
https://api.github.com/repos/keptn/community
|
closed
|
New Maintainer: @AnaMMedina21
|
membership:maintainer community
|
As a GC member, @AnaMMedina21 has been approved as a maintainer with the votes of:
+1 @AloisReitbauer
+1 @thisthat
+1 @thschue
Checklist:
* [x] Create PR to add @AnaMMedina21 to [MAINTAINERS](https://github.com/keptn/keptn/blob/master/MAINTAINERS)
* [x] Add @AnaMMedina21 as maintainer in the CNCF (https://github.com/cncf/foundation/pull/513)
* [x] Merge PR https://github.com/keptn/keptn/pull/9515
* [x] Add @AnaMMedina21 as an owner to the GitHub Organization and to the maintainers group (@thisthat)
|
True
|
New Maintainer: @AnaMMedina21 - As a GC member, @AnaMMedina21 has been approved as a maintainer with the votes of:
+1 @AloisReitbauer
+1 @thisthat
+1 @thschue
Checklist:
* [x] Create PR to add @AnaMMedina21 to [MAINTAINERS](https://github.com/keptn/keptn/blob/master/MAINTAINERS)
* [x] Add @AnaMMedina21 as maintainer in the CNCF (https://github.com/cncf/foundation/pull/513)
* [x] Merge PR https://github.com/keptn/keptn/pull/9515
* [x] Add @AnaMMedina21 as an owner to the GitHub Organization and to the maintainers group (@thisthat)
|
main
|
new maintainer as a gc member has been approved as a maintainer with the votes of aloisreitbauer thisthat thschue checklist create pr to add to add as maintainer in the cncf merge pr add as an owner to the github organization and to the maintainers group thisthat
| 1
|
4,419
| 22,744,001,862
|
IssuesEvent
|
2022-07-07 07:31:47
|
cloverhearts/quilljs-markdown
|
https://api.github.com/repos/cloverhearts/quilljs-markdown
|
closed
|
add QuillMarkdown to quill.modules{}
|
WILL MAKE IT NICE IDEA Saw with Maintainer READY FOR MERGE
|
I see in [your examples](https://cloverhearts.github.io/quilljs-markdown/) that you enable QuillMarkdown this way: `new QuillMarkdown(editor, markdownOptions)`. The standard method is to enable it like this:
```
Quill.register('modules/QuillMarkdown',QuillMarkdown,true)
new quill('#editor', {
modules: {
QuillMarkdown: { /*options*/ }
}
});
```
I tested it this way and it works
|
True
|
add QuillMarkdown to quill.modules{} - I see in [your examples](https://cloverhearts.github.io/quilljs-markdown/) that you enable QuillMarkdown this way: `new QuillMarkdown(editor, markdownOptions)`. The standard method is to enable it like this:
```
Quill.register('modules/QuillMarkdown',QuillMarkdown,true)
new quill('#editor', {
modules: {
QuillMarkdown: { /*options*/ }
}
});
```
I tested it this way and it works
|
main
|
add quillmarkdown to quill modules i see in that you enable quillmarkdown this way new quillmarkdown editor markdownoptions the standard method to enable it like this quill register modules quillmarkdown quillmarkdown true new quill editor modules quillmarkdown options i tested it this way and it works
| 1
|
33,603
| 16,044,155,742
|
IssuesEvent
|
2021-04-22 11:42:44
|
pingcap/tidb
|
https://api.github.com/repos/pingcap/tidb
|
closed
|
Possible performance issue when partition number is large
|
type/performance
|
https://github.com/pingcap/tidb/blob/master/planner/core/logical_plan_builder.go#L2456 will iterate over all partitions for each statement; if the statement is simple and the partition number is large (e.g. 1024), the cost here is not trivial.
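As a toy illustration (plain Python, not TiDB code), walking every partition per statement makes even a trivial lookup O(P) in the partition count:

```python
def prune_partitions(partitions, predicate):
    # Stand-in for the per-statement loop in logical_plan_builder.go:
    # each statement walks every partition, even when the predicate
    # matches only one of them.
    return [p for p in partitions if predicate(p)]

# With 1024 partitions, a simple point query still scans all 1024 entries.
selected = prune_partitions(range(1024), lambda p: p == 7)
```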
|
True
|
Possible performance issue when partition number is large - https://github.com/pingcap/tidb/blob/master/planner/core/logical_plan_builder.go#L2456 will iterate over all partitions for each statement; if the statement is simple and the partition number is large (e.g. 1024), the cost here is not trivial.
|
non_main
|
possible performance issue when partition number is large will iterator all partitions for each statement if the statement is simple and the partition number is large like the cost here is not trivial
| 0
|
405,978
| 11,885,563,043
|
IssuesEvent
|
2020-03-27 19:52:48
|
NuGet/Home
|
https://api.github.com/repos/NuGet/Home
|
closed
|
Nomination errors, such as bad data (version, tfm etc), are not propagated to Visual Studio.
|
Area:Logging Area:Restore Priority:2 Style:PackageReference Type:Bug
|
Take this project;
```
<Project>
<Import Project="Sdk.props" Sdk="Microsoft.NET.Sdk" />
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFrameworks>net472</TargetFrameworks>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="NuGet.Common" Version="blabla" />
</ItemGroup>
<Import Project="Sdk.targets" Sdk="Microsoft.NET.Sdk" />
</Project>
```
Restore from the commandline.
The output contains something like the below:
```
Microsoft (R) Build Engine version 16.0.360-preview+g9781d96883 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.
Build started 1/22/2019 2:23:03 PM.
Project "C:\Users\nikolev.REDMOND\Source\Repos\ConsoleApp51\ConsoleApp51\ConsoleApp51.csproj" on node 1 (Restore target(s)).
C:\Program Files (x86)\Microsoft Visual Studio\2019\IntPreview\Common7\IDE\CommonExtensions\Microsoft\NuGet\NuGet.targets(119,5): error : 'blabla' is not a valid version string. [C:\Users\nikolev.REDMOND\Source\Repos\ConsoleApp51\ConsoleApp51\ConsoleApp51.csproj]
Done Building Project "C:\Users\nikolev.REDMOND\Source\Repos\ConsoleApp51\ConsoleApp51\ConsoleApp51.csproj" (Restore target(s)) -- FAILED.
Build FAILED.
"C:\Users\nikolev.REDMOND\Source\Repos\ConsoleApp51\ConsoleApp51\ConsoleApp51.csproj" (Restore target) (1) ->
(Restore target) ->
C:\Program Files (x86)\Microsoft Visual Studio\2019\IntPreview\Common7\IDE\CommonExtensions\Microsoft\NuGet\NuGet.targets(119,5): error : 'blabla' is not a valid version string. [C:\Users\nikolev.REDMOND\Source\Repos\ConsoleApp51\ConsoleApp51\ConsoleApp51.csproj]
0 Warning(s)
1 Error(s)
```
Restore in VS.
Nothing is seen in the error list.
The output is something like:
```
All packages are already installed and there is nothing to restore.
Time Elapsed: 00:00:00.1668614
========== Finished ==========
```
What's happening here is that the restore is being skipped because the package spec could not be generated properly.
//cc @rrelyea
This will become even more apparent when we add the exact version range checks for PackageDownload
|
1.0
|
Nomination errors, such as bad data (version, tfm etc), are not propagated to Visual Studio. - Take this project;
```
<Project>
<Import Project="Sdk.props" Sdk="Microsoft.NET.Sdk" />
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFrameworks>net472</TargetFrameworks>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="NuGet.Common" Version="blabla" />
</ItemGroup>
<Import Project="Sdk.targets" Sdk="Microsoft.NET.Sdk" />
</Project>
```
Restore from the commandline.
The output contains something like the below:
```
Microsoft (R) Build Engine version 16.0.360-preview+g9781d96883 for .NET Framework
Copyright (C) Microsoft Corporation. All rights reserved.
Build started 1/22/2019 2:23:03 PM.
Project "C:\Users\nikolev.REDMOND\Source\Repos\ConsoleApp51\ConsoleApp51\ConsoleApp51.csproj" on node 1 (Restore target(s)).
C:\Program Files (x86)\Microsoft Visual Studio\2019\IntPreview\Common7\IDE\CommonExtensions\Microsoft\NuGet\NuGet.targets(119,5): error : 'blabla' is not a valid version string. [C:\Users\nikolev.REDMOND\Source\Repos\ConsoleApp51\ConsoleApp51\ConsoleApp51.csproj]
Done Building Project "C:\Users\nikolev.REDMOND\Source\Repos\ConsoleApp51\ConsoleApp51\ConsoleApp51.csproj" (Restore target(s)) -- FAILED.
Build FAILED.
"C:\Users\nikolev.REDMOND\Source\Repos\ConsoleApp51\ConsoleApp51\ConsoleApp51.csproj" (Restore target) (1) ->
(Restore target) ->
C:\Program Files (x86)\Microsoft Visual Studio\2019\IntPreview\Common7\IDE\CommonExtensions\Microsoft\NuGet\NuGet.targets(119,5): error : 'blabla' is not a valid version string. [C:\Users\nikolev.REDMOND\Source\Repos\ConsoleApp51\ConsoleApp51\ConsoleApp51.csproj]
0 Warning(s)
1 Error(s)
```
Restore in VS.
Nothing is seen in the error list.
The output is something like:
```
All packages are already installed and there is nothing to restore.
Time Elapsed: 00:00:00.1668614
========== Finished ==========
```
What's happening here is that the restore is being skipped because the package spec could not be generated properly.
//cc @rrelyea
This will become even more apparent when we add the exact version range checks for PackageDownload
|
non_main
|
nomination errors such as bad data version tfm etc are not propagated to visual studio take this project exe restore from the commandline the output contains something like the below microsoft r build engine version preview for net framework copyright c microsoft corporation all rights reserved build started pm project c users nikolev redmond source repos csproj on node restore target s c program files microsoft visual studio intpreview ide commonextensions microsoft nuget nuget targets error blabla is not a valid version string done building project c users nikolev redmond source repos csproj restore target s failed build failed c users nikolev redmond source repos csproj restore target restore target c program files microsoft visual studio intpreview ide commonextensions microsoft nuget nuget targets error blabla is not a valid version string warning s error s restore in vs nothing is seen in the error list the output is something like all packages are already installed and there is nothing to restore time elapsed finished what s happening here is that the restore is being skipped because the package spec could not be generated properly cc rrelyea this will become even more apparent when we add the exact version range checks for packagedownload
| 0
|
217,729
| 7,327,782,240
|
IssuesEvent
|
2018-03-04 14:17:22
|
Radarr/Radarr
|
https://api.github.com/repos/Radarr/Radarr
|
closed
|
Feature Request: Backlog Search (not RSS)
|
enhancement priority:medium under investigation
|
Add the option to periodically run a search for movies that are in the watched list, but don't meet the existing quality cutoff.
|
1.0
|
Feature Request: Backlog Search (not RSS) - Add the option to periodically run a search for movies that are in the watched list, but don't meet the existing quality cutoff.
|
non_main
|
feature request backlog search not rss add the option to periodically run a search for movies that are in the watched list but don t meet the existing quality cutoff
| 0
|
364,929
| 25,512,630,256
|
IssuesEvent
|
2022-11-28 14:11:55
|
CSCI-GA-2820-FA22-001/promotions
|
https://api.github.com/repos/CSCI-GA-2820-FA22-001/promotions
|
opened
|
Refactor the ID endpoint using RestPLUS
|
documentation
|
**As a** developer
**I need** implement Flask-RESTPlus
**So that** I could generate Swagger Doc with less effort
### Details and Assumptions
* Refactor the endpoints by following the layout outlined in lab-flask-restplus-swagger repo
Endpoints:
1. Retrieving a promotion
2. Deleting a promotion
### Acceptance Criteria
```gherkin
Test coverage for refactored endpoints should be above 95%
```
|
1.0
|
Refactor the ID endpoint using RestPLUS - **As a** developer
**I need** implement Flask-RESTPlus
**So that** I could generate Swagger Doc with less effort
### Details and Assumptions
* Refactor the endpoints by following the layout outlined in lab-flask-restplus-swagger repo
Endpoints:
1. Retrieving a promotion
2. Deleting a promotion
### Acceptance Criteria
```gherkin
Test coverage for refactored endpoints should be above 95%
```
|
non_main
|
refactor the id endpoint using restplus as a developer i need implement flask restplus so that i could generate swagger doc with less effort details and assumptions refactor the endpoints by following the layout outlined in lab flask restplus swagger repo endpoints retrieving a promotion deleting a promotion acceptance criteria gherkin test coverage for refactored endpoints should be above
| 0
|
4,955
| 2,610,162,375
|
IssuesEvent
|
2015-02-26 18:51:29
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
closed
|
AT-TE
|
auto-migrated Priority-Medium Type-Defect
|
```
Increase Health...
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 26 Feb 2011 at 7:06
|
1.0
|
AT-TE - ```
Increase Health...
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 26 Feb 2011 at 7:06
|
non_main
|
at te increase health original issue reported on code google com by gmail com on feb at
| 0
|
4,773
| 24,587,756,654
|
IssuesEvent
|
2022-10-13 21:28:32
|
mozilla/foundation.mozilla.org
|
https://api.github.com/repos/mozilla/foundation.mozilla.org
|
closed
|
index page Queryset iteration can fail
|
engineering backend Maintain
|
### Describe the bug
```
Feb 23 19:35:38 foundation-mozilla-org app/web.3 Internal Server Error: /en/blog/category/mozilla-festival/
Feb 23 19:35:38 foundation-mozilla-org app/web.3 Traceback (most recent call last):
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
Feb 23 19:35:38 foundation-mozilla-org app/web.3 response = get_response(request)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
Feb 23 19:35:38 foundation-mozilla-org app/web.3 response = self.process_exception_by_middleware(e, request)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
Feb 23 19:35:38 foundation-mozilla-org app/web.3 response = wrapped_callback(request, *callback_args, **callback_kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/sentry_sdk/integrations/django/views.py", line 67, in sentry_wrapped_callback
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return callback(request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/contextlib.py", line 74, in inner
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return func(*args, **kwds)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/wagtail/core/views.py", line 24, in serve
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return page.serve(request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/wagtail/contrib/routable_page/models.py", line 121, in serve
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return view(request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py", line 159, in entries_by_category
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return IndexPage.serve(self, request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/wagtail/contrib/routable_page/models.py", line 120, in serve
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return super().serve(request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/wagtail/core/models.py", line 1553, in serve
Feb 23 19:35:38 foundation-mozilla-org app/web.3 self.get_context(request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/index.py", line 69, in get_context
Feb 23 19:35:38 foundation-mozilla-org app/web.3 entries = self.get_entries(context)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/index.py", line 87, in get_entries
Feb 23 19:35:38 foundation-mozilla-org app/web.3 entries = self.filter_entries(entries, context)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py", line 76, in filter_entries
Feb 23 19:35:38 foundation-mozilla-org app/web.3 entries = self.filter_entries_for_category(entries, context)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py", line 99, in filter_entries_for_category
Feb 23 19:35:38 foundation-mozilla-org app/web.3 entry in entries.specific()
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py", line 103, in <listcomp>
Feb 23 19:35:38 foundation-mozilla-org app/web.3 category in entry.category.all()
Feb 23 19:35:38 foundation-mozilla-org app/web.3 TypeError: argument of type 'QuerySet' is not iterable
```
### To Reproduce
Literally no idea: it happens every so often, but when I check the associated URL, it works fine. Searching the web reveals that this isn't actually the QuerySet not being iterable, but the iterator throwing an error internally, which is caught by Django as a "QuerySet is not iterable" error.
See https://code.djangoproject.com/ticket/26600
### Expected behavior
No errors. This should jsut work
### Solutions
We should at the very least wrap the code involved with a try/except so that we can manage the response. Presumably folks are getting a "Server error 500" instead of normal-ish page, which should be avoided.
|
True
|
index page Queryset iteration can fail - ### Describe the bug
```
Feb 23 19:35:38 foundation-mozilla-org app/web.3 Internal Server Error: /en/blog/category/mozilla-festival/
Feb 23 19:35:38 foundation-mozilla-org app/web.3 Traceback (most recent call last):
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/django/core/handlers/exception.py", line 34, in inner
Feb 23 19:35:38 foundation-mozilla-org app/web.3 response = get_response(request)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/django/core/handlers/base.py", line 115, in _get_response
Feb 23 19:35:38 foundation-mozilla-org app/web.3 response = self.process_exception_by_middleware(e, request)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/django/core/handlers/base.py", line 113, in _get_response
Feb 23 19:35:38 foundation-mozilla-org app/web.3 response = wrapped_callback(request, *callback_args, **callback_kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/sentry_sdk/integrations/django/views.py", line 67, in sentry_wrapped_callback
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return callback(request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/contextlib.py", line 74, in inner
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return func(*args, **kwds)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/wagtail/core/views.py", line 24, in serve
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return page.serve(request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/wagtail/contrib/routable_page/models.py", line 121, in serve
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return view(request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py", line 159, in entries_by_category
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return IndexPage.serve(self, request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/wagtail/contrib/routable_page/models.py", line 120, in serve
Feb 23 19:35:38 foundation-mozilla-org app/web.3 return super().serve(request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/.heroku/python/lib/python3.7/site-packages/wagtail/core/models.py", line 1553, in serve
Feb 23 19:35:38 foundation-mozilla-org app/web.3 self.get_context(request, *args, **kwargs)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/index.py", line 69, in get_context
Feb 23 19:35:38 foundation-mozilla-org app/web.3 entries = self.get_entries(context)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/index.py", line 87, in get_entries
Feb 23 19:35:38 foundation-mozilla-org app/web.3 entries = self.filter_entries(entries, context)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py", line 76, in filter_entries
Feb 23 19:35:38 foundation-mozilla-org app/web.3 entries = self.filter_entries_for_category(entries, context)
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py", line 99, in filter_entries_for_category
Feb 23 19:35:38 foundation-mozilla-org app/web.3 entry in entries.specific()
Feb 23 19:35:38 foundation-mozilla-org app/web.3 File "/app/network-api/networkapi/wagtailpages/pagemodels/blog/blog_index.py", line 103, in <listcomp>
Feb 23 19:35:38 foundation-mozilla-org app/web.3 category in entry.category.all()
Feb 23 19:35:38 foundation-mozilla-org app/web.3 TypeError: argument of type 'QuerySet' is not iterable
```
### To Reproduce
Literally no idea: it happens every so often, but when I check the associated URL, it works fine. Searching the web reveals that this isn't actually the QuerySet not being iterable, but the iterator throwing an error internally, which is caught by Django as a "QuerySet is not iterable" error.
See https://code.djangoproject.com/ticket/26600
### Expected behavior
No errors. This should jsut work
### Solutions
We should at the very least wrap the code involved with a try/except so that we can manage the response. Presumably folks are getting a "Server error 500" instead of normal-ish page, which should be avoided.
|
main
|
index page queryset iteration can fail describe the bug feb foundation mozilla org app web internal server error en blog category mozilla festival feb foundation mozilla org app web traceback most recent call last feb foundation mozilla org app web file app heroku python lib site packages django core handlers exception py line in inner feb foundation mozilla org app web response get response request feb foundation mozilla org app web file app heroku python lib site packages django core handlers base py line in get response feb foundation mozilla org app web response self process exception by middleware e request feb foundation mozilla org app web file app heroku python lib site packages django core handlers base py line in get response feb foundation mozilla org app web response wrapped callback request callback args callback kwargs feb foundation mozilla org app web file app heroku python lib site packages sentry sdk integrations django views py line in sentry wrapped callback feb foundation mozilla org app web return callback request args kwargs feb foundation mozilla org app web file app heroku python lib contextlib py line in inner feb foundation mozilla org app web return func args kwds feb foundation mozilla org app web file app heroku python lib site packages wagtail core views py line in serve feb foundation mozilla org app web return page serve request args kwargs feb foundation mozilla org app web file app heroku python lib site packages wagtail contrib routable page models py line in serve feb foundation mozilla org app web return view request args kwargs feb foundation mozilla org app web file app network api networkapi wagtailpages pagemodels blog blog index py line in entries by category feb foundation mozilla org app web return indexpage serve self request args kwargs feb foundation mozilla org app web file app heroku python lib site packages wagtail contrib routable page models py line in serve feb foundation mozilla org app web return super serve request args kwargs feb foundation mozilla org app web file app heroku python lib site packages wagtail core models py line in serve feb foundation mozilla org app web self get context request args kwargs feb foundation mozilla org app web file app network api networkapi wagtailpages pagemodels index py line in get context feb foundation mozilla org app web entries self get entries context feb foundation mozilla org app web file app network api networkapi wagtailpages pagemodels index py line in get entries feb foundation mozilla org app web entries self filter entries entries context feb foundation mozilla org app web file app network api networkapi wagtailpages pagemodels blog blog index py line in filter entries feb foundation mozilla org app web entries self filter entries for category entries context feb foundation mozilla org app web file app network api networkapi wagtailpages pagemodels blog blog index py line in filter entries for category feb foundation mozilla org app web entry in entries specific feb foundation mozilla org app web file app network api networkapi wagtailpages pagemodels blog blog index py line in feb foundation mozilla org app web category in entry category all feb foundation mozilla org app web typeerror argument of type queryset is not iterable to reproduce literally no idea it happens every so often but when i check the associated url it works fine searching the web reveals that this isn t actually the queryset not being iterable but the iterator throwing an error internally which is caught by django as a queryset is not iterable error see expected behavior no errors this should jsut work solutions we should at the very least wrap the code involved with a try except so that we can manage the response presumably folks are getting a server error instead of normal ish page which should be avoided
| 1
|
73,341
| 9,660,465,470
|
IssuesEvent
|
2019-05-20 15:32:35
|
ampproject/amphtml
|
https://api.github.com/repos/ampproject/amphtml
|
opened
|
Update / delete `build-system/SERVING.md`
|
Category: Tooling Related to: Documentation
|
The [How AMP HTML is deployed](https://github.com/ampproject/amphtml/blob/master/build-system/SERVING.md) article was written 3 years ago, and is outdated.
Once the new release workflow is in place, this file should either be updated or deleted.
|
1.0
|
Update / delete `build-system/SERVING.md` - The [How AMP HTML is deployed](https://github.com/ampproject/amphtml/blob/master/build-system/SERVING.md) article was written 3 years ago, and is outdated.
Once the new release workflow is in place, this file should either be updated or deleted.
|
non_main
|
update delete build system serving md the article was written years ago and is outdated once the new release workflow is in place this file should either be updated or deleted
| 0
|
1,700
| 6,574,386,170
|
IssuesEvent
|
2017-09-11 12:42:01
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
rds param group module to enable logging
|
affects_2.3 bug_report waiting_on_maintainer
|
There is no option to enable logging by default
##### ISSUE TYPE
- Bug Report
- Feature Idea
- Documentation Report
##### COMPONENT NAME
rds_param_group_module
##### SUMMARY
There is no option to change logging behaviour on RDSs
|
True
|
rds param group module to enable logging - There is no option to enable logging by default
##### ISSUE TYPE
- Bug Report
- Feature Idea
- Documentation Report
##### COMPONENT NAME
rds_param_group_module
##### SUMMARY
There is no option to change logging behaviour on RDSs
|
main
|
rds param group module to enable logging there is no option to enable logging by default issue type bug report feature idea documentation report component name rds param group module summary there is no option to change logging behaviour on rdss
| 1
|
37,134
| 2,815,753,747
|
IssuesEvent
|
2015-05-19 07:25:19
|
HGustavs/LenaSYS
|
https://api.github.com/repos/HGustavs/LenaSYS
|
closed
|
Answer/Parameter not working for duggor
|
DuggaSys lowPriority
|
The format used for the parameter and answer when you create a dugga is no longer legal, e.g "danswer":"00000010 0 2". Quotation marks are no longer usable due to safety reasons so a workaround for this has to be created.
|
1.0
|
Answer/Parameter not working for duggor - The format used for the parameter and answer when you create a dugga is no longer legal, e.g "danswer":"00000010 0 2". Quotation marks are no longer usable due to safety reasons so a workaround for this has to be created.
|
non_main
|
answer parameter not working for duggor the format used for the parameter and answer when you create a dugga is no longer legal e g danswer quotation marks are no longer usable due to safety reasons so a workaround for this has to be created
| 0
|
438,576
| 12,641,512,935
|
IssuesEvent
|
2020-06-16 06:20:36
|
buddyboss/buddyboss-platform
|
https://api.github.com/repos/buddyboss/buddyboss-platform
|
closed
|
Activity feed: showing only the Activities for the logged-in Account
|
bug component: activity priority: high
|
**Describe the bug**
If a User or Admin is logged in, he can only see the Activities he's done on the feed (example: posting in a Group or replying to a discussion).
When you log out, you can see ALL USERS activity.
**To Reproduce**
Steps to reproduce the behavior:
1. Logged in as a User
2. Post something on your Group discussion
3. Go to activity feed, you can see only see activities you have done on the feed
4. Log out and view the feed again, you can see Activities from all users
5. Log in as a different user, you could not see the feed from other users
**Expected behavior**
Even if a user is logged in or not, they must have the capability to view the activity of users
**Screenshots**
https://drive.google.com/open?id=1s82R6t7wQmGhh6aqpWXh2Pm5lwA6_wrq
or
https://www.loom.com/share/b0b5e3fd0e824e22bbe7ba9b707b8ed9
**Support ticket links**
https://secure.helpscout.net/conversation/1186451388/76405
|
1.0
|
Activity feed: showing only the Activities for the logged-in Account - **Describe the bug**
If a User or Admin is logged in, he can only see the Activities he's done on the feed (example: posting in a Group or replying to a discussion).
When you log out, you can see ALL USERS activity.
**To Reproduce**
Steps to reproduce the behavior:
1. Logged in as a User
2. Post something on your Group discussion
3. Go to activity feed, you can see only see activities you have done on the feed
4. Log out and view the feed again, you can see Activities from all users
5. Log in as a different user, you could not see the feed from other users
**Expected behavior**
Even if a user is logged in or not, they must have the capability to view the activity of users
**Screenshots**
https://drive.google.com/open?id=1s82R6t7wQmGhh6aqpWXh2Pm5lwA6_wrq
or
https://www.loom.com/share/b0b5e3fd0e824e22bbe7ba9b707b8ed9
**Support ticket links**
https://secure.helpscout.net/conversation/1186451388/76405
|
non_main
|
activity feed showing only the activities for the logged in account describe the bug if a user or admin is logged in he can only see the activities he s done on the feed example posting in a group or replying to a discussion when you log out you can see all users activity to reproduce steps to reproduce the behavior logged in as a user post something on your group discussion go to activity feed you can see only see activities you have done on the feed log out and view the feed again you can see activities from all users log in as a different user you could not see the feed from other users expected behavior even if a user is logged in or not they must have the capability to view the activity of users screenshots or support ticket links
| 0
|
5,583
| 27,982,410,229
|
IssuesEvent
|
2023-03-26 10:07:27
|
beyarkay/eskom-calendar
|
https://api.github.com/repos/beyarkay/eskom-calendar
|
opened
|
Missing area schedule
|
waiting-on-maintainer missing-area-schedule
|
**What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
Please also give the province/municipality, our beautiful country has a surprising number of places that are named the same as each other. If you know what your area is named on EskomSePush, including that also helps a lot.
**Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
This really helps us figure out what's working!
**Any other information**
If you've got any other info you think might be helpful, feel free to leave it here
Hi,
I trust all is well. The details is as follow:
Area: Universitas
City: Bloemfontein
Province: Free State
Munic: Mangaung
Group 4
I bought Solar Assistant and saw it on their website
Thanks for the assistance.
Kind Regards,
Barry
|
True
|
Missing area schedule - **What area(s) couldn't you find on [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
Please also give the province/municipality, our beautiful country has a surprising number of places that are named the same as each other. If you know what your area is named on EskomSePush, including that also helps a lot.
**Where did you hear about [eskomcalendar.co.za](https://eskomcalendar.co.za/ec)?**
This really helps us figure out what's working!
**Any other information**
If you've got any other info you think might be helpful, feel free to leave it here
Hi,
I trust all is well. The details is as follow:
Area: Universitas
City: Bloemfontein
Province: Free State
Munic: Mangaung
Group 4
I bought Solar Assistant and saw it on their website
Thanks for the assistance.
Kind Regards,
Barry
|
main
|
missing area schedule what area s couldn t you find on please also give the province municipality our beautiful country has a surprising number of places that are named the same as each other if you know what your area is named on eskomsepush including that also helps a lot where did you hear about this really helps us figure out what s working any other information if you ve got any other info you think might be helpful feel free to leave it here hi i trust all is well the details is as follow area universitas city bloemfontein province free state munic mangaung group i bought solar assistant and saw it on their website thanks for the assistance kind regards barry
| 1
|
67,258
| 12,892,186,128
|
IssuesEvent
|
2020-07-13 19:08:06
|
sourcegraph/sourcegraph
|
https://api.github.com/repos/sourcegraph/sourcegraph
|
closed
|
Write RFC to scale the bundle manager
|
team/code-intelligence
|
This will need to happen sooner than later. Our bundle manager service is currently designed to scale horizontally, but for technical reasons we cannot shard it and is effectively a singleton service. This will undoubtedly become a bottleneck.
Write an RFC that identifies the problem points that prevent horizontal scaling today.
|
1.0
|
Write RFC to scale the bundle manager - This will need to happen sooner than later. Our bundle manager service is currently designed to scale horizontally, but for technical reasons we cannot shard it and is effectively a singleton service. This will undoubtedly become a bottleneck.
Write an RFC that identifies the problem points that prevent horizontal scaling today.
|
non_main
|
write rfc to scale the bundle manager this will need to happen sooner than later our bundle manager service is currently designed to scale horizontally but for technical reasons we cannot shard it and is effectively a singleton service this will undoubtedly become a bottleneck write an rfc that identifies the problem points that prevent horizontal scaling today
| 0
|
2,896
| 10,319,663,928
|
IssuesEvent
|
2019-08-30 18:10:03
|
backdrop-ops/contrib
|
https://api.github.com/repos/backdrop-ops/contrib
|
closed
|
Port request: navbar
|
Maintainer application Port complete Port request
|
I think the admin menu of backdrop is great, but I think there is another very great module: the drupal navbar module. I think it has a great user experience; it would be great to port it in backdrop and maybe use it as a default admin menu, or maybe use a mixture between the backdrop admin menu and the drupal navbar module;
This issue was spun off from a comment by @fivepoints79 in
https://github.com/backdrop/backdrop-issues/issues/495#issuecomment-461456258
|
True
|
Port request: navbar - I think the admin menu of backdrop is great, but I think there is another very great module: the drupal navbar module. I think it has a great user experience; it would be great to port it in backdrop and maybe use it as a default admin menu, or maybe use a mixture between the backdrop admin menu and the drupal navbar module;
This issue was spun off from a comment by @fivepoints79 in
https://github.com/backdrop/backdrop-issues/issues/495#issuecomment-461456258
|
main
|
port request navbar i think the admin menu of backdrop is great but i think there is another very great module the drupal navbar module i think it has a great user experience it would be great to port it in backdrop and maybe use it as a default admin menu or maybe use a mixture between the backdrop admin menu and the drupal navbar module this issue was spun off from a comment by in
| 1
|
4,239
| 20,999,968,329
|
IssuesEvent
|
2022-03-29 16:29:23
|
aws/serverless-application-model
|
https://api.github.com/repos/aws/serverless-application-model
|
closed
|
CommaDelimitedList Property with AllowedValues - Parameter must be one of AllowedValues
|
type/feature maintainer/need-followup
|
<!--
Before reporting a new issue, make sure we don't have any duplicates already open or closed by
searching the issues list. If there is a duplicate, re-open or add a comment to the
existing issue instead of creating a new one. If you are reporting a bug,
make sure to include relevant information asked below to help with debugging.
## GENERAL HELP QUESTIONS ##
Github Issues is for bug reports and feature requests. If you have general support
questions, the following locations are a good place:
- Post a question in StackOverflow with "aws-sam" tag
-->
**Description:**
I'd like to use a CommaDelimitedList parameter with AllowedValues constraint. In my case I'd like to prompt user which Code Pipeline notifications should be delivered in the stack.
**Steps to reproduce the issue:**
1. Create a SAM template as follows (I'll omit Pipeline and SNSTopic creation):
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
NotificationEvents:
Type: CommaDelimitedList
Default: "codepipeline-pipeline-pipeline-execution-failed"
AllowedValues:
- codepipeline-pipeline-action-execution-succeeded
- codepipeline-pipeline-action-execution-failed
- codepipeline-pipeline-action-execution-canceled
- codepipeline-pipeline-action-execution-started
- codepipeline-pipeline-stage-execution-started
- codepipeline-pipeline-stage-execution-succeeded
- codepipeline-pipeline-stage-execution-resumed
- codepipeline-pipeline-stage-execution-canceled
- codepipeline-pipeline-stage-execution-failed
- codepipeline-pipeline-pipeline-execution-failed
- codepipeline-pipeline-pipeline-execution-canceled
- codepipeline-pipeline-pipeline-execution-started
- codepipeline-pipeline-pipeline-execution-resumed
- codepipeline-pipeline-pipeline-execution-succeeded
- codepipeline-pipeline-pipeline-execution-superseded
- codepipeline-pipeline-manual-approval-failed
- codepipeline-pipeline-manual-approval-needed
- codepipeline-pipeline-manual-approval-succeeded
....
Notifications:
Type: AWS::CodeStarNotifications::NotificationRule
Properties:
DetailType: BASIC
EventTypeIds: !Ref NotificationEvents
Name: !Sub ${ServiceName}_codepipeline_notifications_rule
Resource: !Sub
- arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${PIPELINE}
- PIPELINE: !Ref Pipeline
Targets:
- TargetAddress: !Ref SNSTopic
TargetType: SNS
```
2. `sam build`
3. `sam deploy --guided`
4. Enter comma separated values for the parameter:
`Parameter NotificationEvents [codepipeline-pipeline-pipeline-execution-failed]: codepipeline-pipeline-pipeline-execution-failed,codepipeline-pipeline-pipeline-execution-succeeded`
**Observed result:**
Error: Failed to create changeset for the stack: _stack_name_, An error occurred (ValidationError) when calling the CreateChangeSet operation: Parameter 'NotificationEvents' must be one of AllowedValues
**Expected result:**
Parameter values got resolved as array.
|
True
|
CommaDelimitedList Property with AllowedValues - Parameter must be one of AllowedValues - <!--
Before reporting a new issue, make sure we don't have any duplicates already open or closed by
searching the issues list. If there is a duplicate, re-open or add a comment to the
existing issue instead of creating a new one. If you are reporting a bug,
make sure to include relevant information asked below to help with debugging.
## GENERAL HELP QUESTIONS ##
Github Issues is for bug reports and feature requests. If you have general support
questions, the following locations are a good place:
- Post a question in StackOverflow with "aws-sam" tag
-->
**Description:**
I'd like to use a CommaDelimitedList parameter with AllowedValues constraint. In my case I'd like to prompt user which Code Pipeline notifications should be delivered in the stack.
**Steps to reproduce the issue:**
1. Create a SAM template as follows (I'll omit Pipeline and SNSTopic creation):
```
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
NotificationEvents:
Type: CommaDelimitedList
Default: "codepipeline-pipeline-pipeline-execution-failed"
AllowedValues:
- codepipeline-pipeline-action-execution-succeeded
- codepipeline-pipeline-action-execution-failed
- codepipeline-pipeline-action-execution-canceled
- codepipeline-pipeline-action-execution-started
- codepipeline-pipeline-stage-execution-started
- codepipeline-pipeline-stage-execution-succeeded
- codepipeline-pipeline-stage-execution-resumed
- codepipeline-pipeline-stage-execution-canceled
- codepipeline-pipeline-stage-execution-failed
- codepipeline-pipeline-pipeline-execution-failed
- codepipeline-pipeline-pipeline-execution-canceled
- codepipeline-pipeline-pipeline-execution-started
- codepipeline-pipeline-pipeline-execution-resumed
- codepipeline-pipeline-pipeline-execution-succeeded
- codepipeline-pipeline-pipeline-execution-superseded
- codepipeline-pipeline-manual-approval-failed
- codepipeline-pipeline-manual-approval-needed
- codepipeline-pipeline-manual-approval-succeeded
....
Notifications:
Type: AWS::CodeStarNotifications::NotificationRule
Properties:
DetailType: BASIC
EventTypeIds: !Ref NotificationEvents
Name: !Sub ${ServiceName}_codepipeline_notifications_rule
Resource: !Sub
- arn:aws:codepipeline:${AWS::Region}:${AWS::AccountId}:${PIPELINE}
- PIPELINE: !Ref Pipeline
Targets:
- TargetAddress: !Ref SNSTopic
TargetType: SNS
```
2. `sam build`
3. `sam deploy --guided`
4. Enter comma separated values for the parameter:
`Parameter NotificationEvents [codepipeline-pipeline-pipeline-execution-failed]: codepipeline-pipeline-pipeline-execution-failed,codepipeline-pipeline-pipeline-execution-succeeded`
**Observed result:**
Error: Failed to create changeset for the stack: _stack_name_, An error occurred (ValidationError) when calling the CreateChangeSet operation: Parameter 'NotificationEvents' must be one of AllowedValues
**Expected result:**
Parameter values are resolved as an array.
|
main
|
commadelimitedlist property with allowedvalues parameter must be one of allowedvalues before reporting a new issue make sure we don t have any duplicates already open or closed by searching the issues list if there is a duplicate re open or add a comment to the existing issue instead of creating a new one if you are reporting a bug make sure to include relevant information asked below to help with debugging general help questions github issues is for bug reports and feature requests if you have general support questions the following locations are a good place post a question in stackoverflow with aws sam tag description i d like to use a commadelimitedlist parameter with allowedvalues constraint in my case i d like to prompt user which code pipeline notifications should be delivered in the stack steps to reproduce the issue create a sam template as follows i ll omit pipeline and snstopic creation awstemplateformatversion transform aws serverless parameters notificationevents type commadelimitedlist default codepipeline pipeline pipeline execution failed allowedvalues codepipeline pipeline action execution succeeded codepipeline pipeline action execution failed codepipeline pipeline action execution canceled codepipeline pipeline action execution started codepipeline pipeline stage execution started codepipeline pipeline stage execution succeeded codepipeline pipeline stage execution resumed codepipeline pipeline stage execution canceled codepipeline pipeline stage execution failed codepipeline pipeline pipeline execution failed codepipeline pipeline pipeline execution canceled codepipeline pipeline pipeline execution started codepipeline pipeline pipeline execution resumed codepipeline pipeline pipeline execution succeeded codepipeline pipeline pipeline execution superseded codepipeline pipeline manual approval failed codepipeline pipeline manual approval needed codepipeline pipeline manual approval succeeded notifications type aws codestarnotifications notificationrule properties detailtype basic eventtypeids ref notificationevents name sub servicename codepipeline notifications rule resource sub arn aws codepipeline aws region aws accountid pipeline pipeline ref pipeline targets targetaddress ref snstopic targettype sns sam build sam deploy guided enter comma separated values for the parameter parameter notificationevents codepipeline pipeline pipeline execution failed codepipeline pipeline pipeline execution succeeded observed result error failed to create changeset for the stack stack name an error occurred validationerror when calling the createchangeset operation parameter notificationevents must be one of allowedvalues expected result parameter values got resolved as array
| 1
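A minimal Python sketch (a hypothetical helper, not SAM or CloudFormation source) of the per-element validation the reporter expects: split the CommaDelimitedList value on commas and check each element against AllowedValues, instead of comparing the raw string as a whole.

```python
# Hypothetical helper sketching per-element AllowedValues validation
# for a CommaDelimitedList parameter (not SAM/CloudFormation code).
def validate_comma_delimited(value, allowed_values):
    """Split a CommaDelimitedList value and check each element
    against AllowedValues, rather than matching the raw string."""
    items = [item.strip() for item in value.split(",")]
    bad = [item for item in items if item not in allowed_values]
    if bad:
        raise ValueError(f"Values not in AllowedValues: {bad}")
    return items

allowed = {
    "codepipeline-pipeline-pipeline-execution-failed",
    "codepipeline-pipeline-pipeline-execution-succeeded",
}

# The raw string fails a whole-string comparison, yet every element is allowed:
raw = ("codepipeline-pipeline-pipeline-execution-failed,"
       "codepipeline-pipeline-pipeline-execution-succeeded")
assert raw not in allowed  # whole-string check: rejected (the reported error)
assert validate_comma_delimited(raw, allowed) == [
    "codepipeline-pipeline-pipeline-execution-failed",
    "codepipeline-pipeline-pipeline-execution-succeeded",
]  # per-element check: accepted (the expected result)
```

The contrast between the two assertions mirrors the observed vs. expected result in the report.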
|
56,227
| 14,984,678,726
|
IssuesEvent
|
2021-01-28 18:56:54
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
closed
|
Table Field module incorrectly saves the values
|
Core Application Team Defect Drupal engineering Unplanned work
|
**Describe the defect**
When the table field saves its data, it injects the `caption` field into the `values` field. This causes the `values` field to be serialized as an object instead of an array, which makes the output of the JSON created by CMS export inconsistent depending on whether the caption exists.
With the correct data structure
```
"field_facility_service_hours": [
{
"caption": "caption",
"value": [
[
"Mon",
""
],
[
"Tue",
""
],
```
With incorrect data structure:
```
"field_facility_service_hours": [
{
"value": {
"0": [
"Mon",
"8:00 a.m. to 4:30 p.m. ET"
],
"caption": "",
"1": [
"Tue",
"8:00 a.m. to 4:30 p.m. ET"
],
```
The fix requires a patch to the table field module to the file `Drupal\tablefield\Plugin\Field\FieldType\TablefieldItem`
Line 182 inside of the `setValue` method remove the following code:
```
if (isset($values['caption'])) {
$values['value']['caption'] = $values['caption'];
}
```
Related slack discussions:
* https://dsva.slack.com/archives/C01512KN35G/p1611760815169800
AC:
* The caption does not save in the `values` field in the field table
* The default value has the caption in the correct place
* The correct caption shows up on the view and edit pages of the node
* All of the current table fields have fixed content
|
1.0
|
Table Field module incorrectly saves the values - **Describe the defect**
When the table field saves its data, it injects the `caption` field into the `values` field. This causes the `values` field to be serialized as an object instead of an array, which makes the output of the JSON created by CMS export inconsistent depending on whether the caption exists.
With the correct data structure
```
"field_facility_service_hours": [
{
"caption": "caption",
"value": [
[
"Mon",
""
],
[
"Tue",
""
],
```
With incorrect data structure:
```
"field_facility_service_hours": [
{
"value": {
"0": [
"Mon",
"8:00 a.m. to 4:30 p.m. ET"
],
"caption": "",
"1": [
"Tue",
"8:00 a.m. to 4:30 p.m. ET"
],
```
The fix requires a patch to the table field module to the file `Drupal\tablefield\Plugin\Field\FieldType\TablefieldItem`
Line 182 inside of the `setValue` method remove the following code:
```
if (isset($values['caption'])) {
$values['value']['caption'] = $values['caption'];
}
```
Related slack discussions:
* https://dsva.slack.com/archives/C01512KN35G/p1611760815169800
AC:
* The caption does not save in the `values` field in the field table
* The default value has the caption in the correct place
* The correct caption shows up on the view and edit pages of the node
* All of the current table fields have fixed content
|
non_main
|
table field module incorrectly saves the values describe the defect when the table field saves it s data it is injecting the caption field into the values field this causes the values field to be serialized as an object instead of an array this causes the output of the json created by cms export to be inconsistent depending on if the caption exists with the correct data structure field facility service hours caption caption value mon tue with incorrect data structure field facility service hours value mon a m to p m et caption tue a m to p m et the fix requires a patch to the table field module to the file drupal tablefield plugin field fieldtype tablefielditem line inside of the setvalue method remove the following code if isset values values values related slack discussions ac the caption does not save in the values field in the field table the default value has the caption in the correct place the correct caption shows up on the view and edit pages of the node all of the current table fields have fixed content
| 0
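The serialization difference described in the defect can be reproduced directly: once a string key such as `caption` sits alongside the integer row indexes, the value can only serialize as a JSON object, never as an array. A short Python sketch of the effect (the module itself is PHP; this only illustrates the data-structure behavior):

```python
import json

# Integer-indexed rows serialize as a JSON array:
rows = [["Mon", "8:00 a.m. to 4:30 p.m. ET"],
        ["Tue", "8:00 a.m. to 4:30 p.m. ET"]]
as_array = json.dumps(rows)

# But once a string key like "caption" is injected alongside the row
# indexes (the effect of the unpatched setValue() described above),
# the value can only be represented as a JSON object:
tainted = {0: rows[0], "caption": "", 1: rows[1]}
as_object = json.dumps(tainted)

assert as_array.startswith("[")   # array output: the correct structure
assert as_object.startswith("{")  # object output: the defect
```

Removing the caption injection, as the patch does, keeps the rows purely integer-indexed and the export consistently array-shaped.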
|
1,000
| 4,769,875,134
|
IssuesEvent
|
2016-10-26 13:53:08
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
Key does not seem to have been added (but it has)
|
affects_2.1 bug_report waiting_on_maintainer
|
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`apt_key`
##### ANSIBLE VERSION
```
ansible 2.1.2.0
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The module reports a failure, but the key has successfully been added on the servers
##### STEPS TO REPRODUCE
```
- name: add key for yarn package
apt_key:
keyserver: pgp.mit.edu
id: D101F7899D41F3C3
#temp fix because of the "key does not seem to have been added"
#ignore_errors: yes
become: yes
become_user: root
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Changed or OK output
##### ACTUAL RESULTS
```
FAILED! => {"changed": false, "failed": true, "id": "9D41F3C3", "msg": "key does not seem to have been added"}
```
But when running the command `apt-key list`, **the right key for yarn is listed**.
|
True
|
Key does not seem to have been added (but it has) - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`apt_key`
##### ANSIBLE VERSION
```
ansible 2.1.2.0
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The module reports a failure, but the key has successfully been added on the servers
##### STEPS TO REPRODUCE
```
- name: add key for yarn package
apt_key:
keyserver: pgp.mit.edu
id: D101F7899D41F3C3
#temp fix because of the "key does not seem to have been added"
#ignore_errors: yes
become: yes
become_user: root
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
Changed or OK output
##### ACTUAL RESULTS
```
FAILED! => {"changed": false, "failed": true, "id": "9D41F3C3", "msg": "key does not seem to have been added"}
```
But when running the command `apt-key list`, **the right key for yarn is listed**.
|
main
|
key does not seem to have been added but it has issue type bug report component name apt key ansible version ansible os environment n a summary the module says it fails but the key has successfully been added on servers steps to reproduce name add key for yarn package apt key keyserver pgp mit edu id temp fix because of the key does not seem to have been added ignore errors yes become yes become user root expected results changed or ok output actual results failed changed false failed true id msg key does not seem to have been added but when running the command apt key list the right key for yarn is listed
| 1
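One plausible reading of this failure (an assumption, not confirmed against the module source) is a short-id vs. long-id mismatch: the task supplies the 16-character id `D101F7899D41F3C3`, while the error reports the 8-character short id `9D41F3C3`. A suffix-based comparison, sketched here with a hypothetical helper, would treat the two as the same key:

```python
# Hypothetical helper, not the apt_key module's code: compare GPG key
# ids of possibly different lengths (short vs long form) by matching
# on the shorter suffix, since the short id is the tail of the long id.
def same_key(id_a, id_b):
    a, b = id_a.upper(), id_b.upper()
    n = min(len(a), len(b))
    return a[-n:] == b[-n:]

# The long id from the task and the short id from the error message
# refer to the same key under a suffix comparison:
assert same_key("D101F7899D41F3C3", "9D41F3C3")
assert not same_key("D101F7899D41F3C3", "DEADBEEF")
```

A strict equality check between the two forms would report "key does not seem to have been added" even though `apt-key list` shows the key, matching the observed behavior.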
|
580
| 4,069,151,074
|
IssuesEvent
|
2016-05-27 01:55:39
|
Particular/NServiceBus.Host.AzureCloudService
|
https://api.github.com/repos/Particular/NServiceBus.Host.AzureCloudService
|
closed
|
Unnecessary Cloud Service role instance recycling on removal of a dynamic hosted endpoint from storage
|
State: In Progress - Maintainer Prio Tag: Maintainer Prio Type: Bug
|
## Who's affected
* All users using Azure Cloud Service dynamic host (multi-host) that need to remove dynamic endpoints from the host.
## Symptoms
When a dynamically hosted endpoint is no longer needed and is removed from a storage account, the Azure dynamic host performs unnecessary role instance recycling. This causes all other multi-hosted endpoints to restart as well.
## Impact assessment
For single-instance cloud services this means endpoints will be down during the service restart.
Impact is low for scaled-out roles, as other instances should continue while the restarting instance is unavailable.
## Additional info
Original [community PR](https://github.com/Particular/NServiceBus.Host.AzureCloudService/pull/18)
|
True
|
Unnecessary Cloud Service role instance recycling on removal of a dynamic hosted endpoint from storage - ## Who's affected
* All users using Azure Cloud Service dynamic host (multi-host) that need to remove dynamic endpoints from the host.
## Symptoms
When a dynamically hosted endpoint is no longer needed and is removed from a storage account, the Azure dynamic host performs unnecessary role instance recycling. This causes all other multi-hosted endpoints to restart as well.
## Impact assessment
For single-instance cloud services this means endpoints will be down during the service restart.
Impact is low for scaled-out roles, as other instances should continue while the restarting instance is unavailable.
## Additional info
Original [community PR](https://github.com/Particular/NServiceBus.Host.AzureCloudService/pull/18)
|
main
|
unnecessary cloud service role instance recycling on removal of a dynamic hosted endpoint from storage who s affected all users using azure cloud service dynamic host multi host that need to remove dynamic endpoints from the host symptoms when a dynamically hosted endpoint is no longer needed and removed from a storage account azure dynamic host performs unnecessary role instance recycling this causes all other multi hosted endpoints to restart as well impact assessment for single instance cloud services it means endpoints will be down during service restart low for scaled out roles as other instances should continue while restarting instance is not available additional info original
| 1
|
235,318
| 19,323,150,062
|
IssuesEvent
|
2021-12-14 08:34:36
|
asyncapi/ts-nats-template
|
https://api.github.com/repos/asyncapi/ts-nats-template
|
closed
|
Introduce TCK as a CI test
|
enhancement tests stale
|
#### Reason/Context
Introduce the [TCK](https://github.com/asyncapi/tck) to ensure all valid AsyncAPI documents work as expected, i.e. can be generated and the tests succeed.
#### Description
How this should be solved is up for discussion.
Depending on the time it takes the CI to finish all the tests, we might have to add a GH action which runs each morning/evening and, if any errors are found, creates a PR (if one does not already exist) noting that a certain document is failing to be rendered. Maybe this can be used across all our repositories 🤔 ?
Blocked by:
- #10
- #9
|
1.0
|
Introduce TCK as a CI test - #### Reason/Context
Introduce the [TCK](https://github.com/asyncapi/tck) to ensure all valid AsyncAPI documents work as expected, i.e. can be generated and the tests succeed.
#### Description
How this should be solved is up for discussion.
Depending on the time it takes the CI to finish all the tests, we might have to add a GH action which runs each morning/evening and, if any errors are found, creates a PR (if one does not already exist) noting that a certain document is failing to be rendered. Maybe this can be used across all our repositories 🤔 ?
Blocked by:
- #10
- #9
|
non_main
|
introduce tck as a ci test reason context introduce the to ensure all valid asyncapi documents work as expected i e can be generated and tests are succeeding description how this should be solved is up for discussion depending on the time it takes to finish all the tests by the ci we might have to add a gh action which runs in the morning evening each day and if any errors are found it should create a pr if one does not already exist that a certain document is failing to be rendered maybe this can be used through all our repositories 🤔 blocked by
| 0
|
426,968
| 29,670,247,903
|
IssuesEvent
|
2023-06-11 10:12:25
|
AttractorSchool/ESDP-AP-10-3
|
https://api.github.com/repos/AttractorSchool/ESDP-AP-10-3
|
closed
|
Add a "Docker" page to the project Wiki[5]
|
documentation
|
## Why is this needed?
Docker solves dependency and environment problems. Containers make it possible to package an application and all of its dependencies into a single image: libraries, system utilities, and configuration files. This makes it easier to move the application to other infrastructure. Since we will soon start working with Docker, a wiki page will be useful for getting familiar with it.
## What needs to be done
- Find online sources that describe how Docker works in detail
- Briefly describe how Docker works
- Add links for further study
## Estimated time
5 hours
|
1.0
|
Add a "Docker" page to the project Wiki[5] - ## Why is this needed?
Docker solves dependency and environment problems. Containers make it possible to package an application and all of its dependencies into a single image: libraries, system utilities, and configuration files. This makes it easier to move the application to other infrastructure. Since we will soon start working with Docker, a wiki page will be useful for getting familiar with it.
## What needs to be done
- Find online sources that describe how Docker works in detail
- Briefly describe how Docker works
- Add links for further study
## Estimated time
5 hours
|
non_main
|
add a docker page to the project wiki why is this needed docker solves dependency and environment problems containers make it possible to package an application and all of its dependencies into a single image libraries system utilities and configuration files this makes it easier to move the application to other infrastructure since we will soon start working with docker a wiki page will be useful for getting familiar with docker what needs to be done find online sources that describe how docker works in detail briefly describe how docker works add links for further study estimated time hours
| 0
|
2,413
| 8,569,230,279
|
IssuesEvent
|
2018-11-11 08:11:42
|
ansible/ansible
|
https://api.github.com/repos/ansible/ansible
|
closed
|
AttributeError: 'EC2' object has no attribute 'describe_iam_instance_profile_associations'
|
affects_2.6 aws bug cloud module needs_maintainer support:community traceback
|
<!---
Verify first that your issue/request is not already reported on GitHub.
THIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.
Also test if the latest release, and devel branch are affected too.
ALWAYS add information AFTER (OUTSIDE) these html comments.
Otherwise it may end up being automatically closed by our bot. -->
##### SUMMARY
<!--- Explain the problem briefly -->
The `ec2_instance` module crashes with the following error message:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'EC2' object has no attribute 'describe_iam_instance_profile_associations'
I locally installed the following python modules: boto, boto3 and botocore
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Insert, BELOW THIS COMMENT, the name of the module, plugin, task or feature.
Do not include extra details here, e.g. "vyos_command" not "the network module vyos_command" or the full path-->
ec2_instance
##### ANSIBLE VERSION
<!--- Paste, BELOW THIS COMMENT, verbatim output from "ansible --version" between quotes below -->
```
ansible 2.6.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of "ansible-config dump --only-changed"
Otherwise, mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).-->
The output of `$ ansible-config dump --only-changed` is:
DEFAULT_HOST_LIST(/home/ubuntu/bitbucket/dev-ops/ansible/ansible.cfg) = [u'/home/ubuntu/bitbucket/dev-ops/ansible/ec2.py']
DEFAULT_VAULT_PASSWORD_FILE(/home/ubuntu/bitbucket/dev-ops/ansible/ansible.cfg) = /home/ubuntu/.ansible_vault_password.txt
HOST_KEY_CHECKING(/home/ubuntu/bitbucket/dev-ops/ansible/ansible.cfg) = False
##### OS / ENVIRONMENT
<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.-->
Ubuntu 16.04
The module `ec2_instance` is executed locally.
##### STEPS TO REPRODUCE
<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used. -->
<!--- Paste example playbooks or commands between quotes below -->
Basic playbook to reproduce the error:
```yaml
---
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: Build list of EC2 instances
ec2_instance_facts:
filters:
"tag:monitoring": "test"
register: result_instances
- name: Attach CloudWatch Agent IAM role to host
ec2_instance:
instance_ids: "{{ item.instance_id }}"
instance_role: "{{ mds_cloud_watch_agent_iam_role_name }}"
with_items: "{{ result_instances.instances }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expected the IAM role to be attached to the EC2 instance(s).
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
ansible-playbook 2.6.2
config file = /home/ubuntu/bitbucket/dev-ops/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
Using /home/ubuntu/bitbucket/dev-ops/ansible/ansible.cfg as config file
Parsed /home/ubuntu/bitbucket/dev-ops/ansible/ec2.py inventory source with script plugin
PLAYBOOK: tst.yml ********************************************************************************************************************************************
1 plays in tst.yml
PLAY [localhost] *********************************************************************************************************************************************
META: ran handlers
TASK [Build list of EC2 instances] ***************************************************************************************************************************
task path: /home/ubuntu/bitbucket/dev-ops/ansible/tst.yml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077 `" && echo ansible-tmp-1533143862.3-178131144353077="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/ec2_instance_facts.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-16222fm313b/tmpM3bUJn TO /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077/ec2_instance_facts.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077/ /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077/ec2_instance_facts.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077/ec2_instance_facts.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"instances": [
HIDDEN FOR CONFIDENTIALITY
],
"invocation": {
"module_args": {
"aws_access_key": null,
"aws_secret_key": null,
"ec2_url": null,
"filters": {
"tag:monitoring": "test"
},
"instance_ids": [],
"profile": null,
"region": null,
"security_token": null,
"validate_certs": true
}
}
}
TASK [Attach CloudWatch Agent IAM role to host] **************************************************************************************************************
task path: /home/ubuntu/bitbucket/dev-ops/ansible/tst.yml:13
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906 `" && echo ansible-tmp-1533143862.89-184565039690906="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/ec2_instance.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-16222fm313b/tmpT8dRlh TO /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906/ec2_instance.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906/ /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906/ec2_instance.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906/ec2_instance.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py", line 1542, in <module>
main()
File "/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py", line 1527, in main
ensure_present(existing_matches=existing_matches, changed=changed, ec2=ec2, state=state)
File "/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py", line 1372, in ensure_present
handle_existing(existing_matches, changed, ec2, state)
File "/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py", line 1358, in handle_existing
changed |= add_or_update_instance_profile(existing_matches[0], module.params.get('instance_role'))
File "/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py", line 754, in add_or_update_instance_profile
association = ec2.describe_iam_instance_profile_associations(Filters=[{'Name': 'instance-id', 'Values': [instance['InstanceId']]}])
AttributeError: 'EC2' object has no attribute 'describe_iam_instance_profile_associations'
failed: [localhost] (item={HIDDEN FOR CONFIDENTIALITY},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py\", line 1542, in <module>\n main()\n File \"/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py\", line 1527, in main\n ensure_present(existing_matches=existing_matches, changed=changed, ec2=ec2, state=state)\n File \"/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py\", line 1372, in ensure_present\n handle_existing(existing_matches, changed, ec2, state)\n File \"/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py\", line 1358, in handle_existing\n changed |= add_or_update_instance_profile(existing_matches[0], module.params.get('instance_role'))\n File \"/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py\", line 754, in add_or_update_instance_profile\n association = ec2.describe_iam_instance_profile_associations(Filters=[{'Name': 'instance-id', 'Values': [instance['InstanceId']]}])\nAttributeError: 'EC2' object has no attribute 'describe_iam_instance_profile_associations'\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
to retry, use: --limit @/home/ubuntu/bitbucket/dev-ops/ansible/tst.retry
PLAY RECAP ***************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
ubuntu@ip-172-31-19-46:~/bitbucket/dev-ops/ansible$
```
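The attribute in the traceback is an EC2 API call that, in boto3, lives on the low-level client rather than on the service resource wrapper, which matches the `'EC2' object has no attribute` error. The sketch below uses hypothetical stand-in classes (not boto3 itself) to illustrate the failure mode and the usual escape hatch of reaching the client through the resource's `meta.client`:

```python
# Hypothetical stand-ins for a boto3-style resource/client pair, to
# illustrate the failure mode in the traceback: the describe_* API
# exists on the low-level client, not on the resource wrapper.
class FakeEC2Client:
    def describe_iam_instance_profile_associations(self, Filters):
        return {"IamInstanceProfileAssociations": []}

class FakeEC2Resource:
    """Resource wrapper: no describe_* methods, but it exposes the
    underlying client the way boto3 resources do, via .meta.client."""
    def __init__(self, client):
        self.meta = type("Meta", (), {"client": client})()

client = FakeEC2Client()
resource = FakeEC2Resource(client)

# Calling the describe_* method on the resource raises AttributeError,
# as in the report; the method is only present on the client:
assert not hasattr(resource, "describe_iam_instance_profile_associations")
resp = resource.meta.client.describe_iam_instance_profile_associations(
    Filters=[{"Name": "instance-id", "Values": ["i-0123456789abcdef0"]}])
assert resp == {"IamInstanceProfileAssociations": []}
```

The instance id in the `Filters` value is a placeholder; the filter shape mirrors the call shown in the traceback.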
|
True
|
AttributeError: 'EC2' object has no attribute 'describe_iam_instance_profile_associations' - <!---
Verify first that your issue/request is not already reported on GitHub.
THIS FORM WILL BE READ BY A MACHINE, COMPLETE ALL SECTIONS AS DESCRIBED.
Also test if the latest release, and devel branch are affected too.
ALWAYS add information AFTER (OUTSIDE) these html comments.
Otherwise it may end up being automatically closed by our bot. -->
##### SUMMARY
<!--- Explain the problem briefly -->
The `ec2_instance` module crashes with the following error message:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'EC2' object has no attribute 'describe_iam_instance_profile_associations'
I locally installed the following python modules: boto, boto3 and botocore
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Insert, BELOW THIS COMMENT, the name of the module, plugin, task or feature.
Do not include extra details here, e.g. "vyos_command" not "the network module vyos_command" or the full path-->
ec2_instance
##### ANSIBLE VERSION
<!--- Paste, BELOW THIS COMMENT, verbatim output from "ansible --version" between quotes below -->
```
ansible 2.6.2
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of "ansible-config dump --only-changed"
Otherwise, mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).-->
The output of `$ ansible-config dump --only-changed` is:
DEFAULT_HOST_LIST(/home/ubuntu/bitbucket/dev-ops/ansible/ansible.cfg) = [u'/home/ubuntu/bitbucket/dev-ops/ansible/ec2.py']
DEFAULT_VAULT_PASSWORD_FILE(/home/ubuntu/bitbucket/dev-ops/ansible/ansible.cfg) = /home/ubuntu/.ansible_vault_password.txt
HOST_KEY_CHECKING(/home/ubuntu/bitbucket/dev-ops/ansible/ansible.cfg) = False
##### OS / ENVIRONMENT
<!--- Mention, BELOW THIS COMMENT, the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.-->
Ubuntu 16.04
The module `ec2_instance` is executed locally.
##### STEPS TO REPRODUCE
<!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used. -->
<!--- Paste example playbooks or commands between quotes below -->
Basic playbook to reproduce the error:
```yaml
---
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: Build list of EC2 instances
ec2_instance_facts:
filters:
"tag:monitoring": "test"
register: result_instances
- name: Attach CloudWatch Agent IAM role to host
ec2_instance:
instance_ids: "{{ item.instance_id }}"
instance_role: "{{ mds_cloud_watch_agent_iam_role_name }}"
with_items: "{{ result_instances.instances }}"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expected the IAM role to be attached to the EC2 instance(s).
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
ansible-playbook 2.6.2
config file = /home/ubuntu/bitbucket/dev-ops/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
Using /home/ubuntu/bitbucket/dev-ops/ansible/ansible.cfg as config file
Parsed /home/ubuntu/bitbucket/dev-ops/ansible/ec2.py inventory source with script plugin
PLAYBOOK: tst.yml ********************************************************************************************************************************************
1 plays in tst.yml
PLAY [localhost] *********************************************************************************************************************************************
META: ran handlers
TASK [Build list of EC2 instances] ***************************************************************************************************************************
task path: /home/ubuntu/bitbucket/dev-ops/ansible/tst.yml:7
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077 `" && echo ansible-tmp-1533143862.3-178131144353077="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/ec2_instance_facts.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-16222fm313b/tmpM3bUJn TO /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077/ec2_instance_facts.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077/ /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077/ec2_instance_facts.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077/ec2_instance_facts.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.3-178131144353077/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"instances": [
HIDDEN FOR CONFIDENTIALITY
],
"invocation": {
"module_args": {
"aws_access_key": null,
"aws_secret_key": null,
"ec2_url": null,
"filters": {
"tag:monitoring": "test"
},
"instance_ids": [],
"profile": null,
"region": null,
"security_token": null,
"validate_certs": true
}
}
}
TASK [Attach CloudWatch Agent IAM role to host] **************************************************************************************************************
task path: /home/ubuntu/bitbucket/dev-ops/ansible/tst.yml:13
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<127.0.0.1> EXEC /bin/sh -c 'echo ~ubuntu && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906 `" && echo ansible-tmp-1533143862.89-184565039690906="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906 `" ) && sleep 0'
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/amazon/ec2_instance.py
<127.0.0.1> PUT /home/ubuntu/.ansible/tmp/ansible-local-16222fm313b/tmpT8dRlh TO /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906/ec2_instance.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906/ /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906/ec2_instance.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906/ec2_instance.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ubuntu/.ansible/tmp/ansible-tmp-1533143862.89-184565039690906/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py", line 1542, in <module>
main()
File "/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py", line 1527, in main
ensure_present(existing_matches=existing_matches, changed=changed, ec2=ec2, state=state)
File "/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py", line 1372, in ensure_present
handle_existing(existing_matches, changed, ec2, state)
File "/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py", line 1358, in handle_existing
changed |= add_or_update_instance_profile(existing_matches[0], module.params.get('instance_role'))
File "/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py", line 754, in add_or_update_instance_profile
association = ec2.describe_iam_instance_profile_associations(Filters=[{'Name': 'instance-id', 'Values': [instance['InstanceId']]}])
AttributeError: 'EC2' object has no attribute 'describe_iam_instance_profile_associations'
failed: [localhost] (item={HIDDEN FOR CONFIDENTIALITY},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py\", line 1542, in <module>\n main()\n File \"/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py\", line 1527, in main\n ensure_present(existing_matches=existing_matches, changed=changed, ec2=ec2, state=state)\n File \"/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py\", line 1372, in ensure_present\n handle_existing(existing_matches, changed, ec2, state)\n File \"/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py\", line 1358, in handle_existing\n changed |= add_or_update_instance_profile(existing_matches[0], module.params.get('instance_role'))\n File \"/tmp/ansible_Qnuf0B/ansible_module_ec2_instance.py\", line 754, in add_or_update_instance_profile\n association = ec2.describe_iam_instance_profile_associations(Filters=[{'Name': 'instance-id', 'Values': [instance['InstanceId']]}])\nAttributeError: 'EC2' object has no attribute 'describe_iam_instance_profile_associations'\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
to retry, use: --limit @/home/ubuntu/bitbucket/dev-ops/ansible/tst.retry
PLAY RECAP ***************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
ubuntu@ip-172-31-19-46:~/bitbucket/dev-ops/ansible$
```
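The `AttributeError` above is the classic symptom of a botocore release whose generated EC2 client predates the `describe_iam_instance_profile_associations` API, so upgrading boto3/botocore is the likely fix. A minimal sketch of the kind of guard the module could use — the class and function below are illustrative stand-ins, not the real `ec2_instance` code:

```python
class OldEC2Client:
    """Stand-in for a client generated from an outdated botocore service model:
    it has the long-standing calls but not the newer profile-association API."""
    def describe_instances(self):
        return {"Reservations": []}

def add_or_update_instance_profile(ec2):
    # Guard the newer API instead of calling it unconditionally, so the
    # failure mode is an actionable message rather than a bare AttributeError.
    if not hasattr(ec2, "describe_iam_instance_profile_associations"):
        raise RuntimeError(
            "botocore is too old for instance_role support; "
            "upgrade boto3/botocore"
        )
    return ec2.describe_iam_instance_profile_associations()
```

With a guard like this the task would fail with a clear upgrade hint instead of `MODULE FAILURE`.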
|
main
|
attributeerror object has no attribute describe iam instance profile associations verify first that your issue request is not already reported on github this form will be read by a machine complete all sections as described also test if the latest release and devel branch are affected too always add information after outside these html comments otherwise it may end up being automatically closed by our bot summary the instance module crashes with the following error message an exception occurred during task execution to see the full traceback use vvv the error was attributeerror object has no attribute describe iam instance profile associations i locally installed the following python modules boto and botocore issue type bug report component name insert below this comment the name of the module plugin task or feature do not include extra details here e g vyos command not the network module vyos command or the full path instance ansible version ansible config file etc ansible ansible cfg configured module search path ansible python module location usr lib dist packages ansible executable location usr bin ansible python version default dec configuration if using ansible or above paste below this comment the results of ansible config dump only changed otherwise mention any settings you have changed added removed in ansible cfg or using the ansible environment variables the output of ansible config dump only changed is default host list home ubuntu bitbucket dev ops ansible ansible cfg default vault password file home ubuntu bitbucket dev ops ansible ansible cfg home ubuntu ansible vault password txt host key checking home ubuntu bitbucket dev ops ansible ansible cfg false os environment mention below this comment the os you are running ansible from and the os you are managing or say n a for anything that is not platform specific also mention the specific version of what you are trying to control e g if this is a network bug the version of firmware on the network device 
ubuntu the module instance is executed locally steps to reproduce for bugs show exactly how to reproduce the problem using a minimal test case for new features show how the feature would be used basic playbook to reproduce the error yaml hosts localhost connection local gather facts no tasks name build list of instances instance facts filters tag monitoring test register result instances name attach cloudwatch agent iam role to host instance instance ids item instance id instance role mds cloud watch agent iam role name with items result instances instances expected results i expected the iam role to be attached to the instance s actual results ansible playbook config file home ubuntu bitbucket dev ops ansible ansible cfg configured module search path ansible python module location usr lib dist packages ansible executable location usr bin ansible playbook python version default dec using home ubuntu bitbucket dev ops ansible ansible cfg as config file parsed home ubuntu bitbucket dev ops ansible py inventory source with script plugin playbook tst yml plays in tst yml play meta ran handlers task task path home ubuntu bitbucket dev ops ansible tst yml establish local connection for user ubuntu exec bin sh c echo ubuntu sleep exec bin sh c umask mkdir p echo home ubuntu ansible tmp ansible tmp echo ansible tmp echo home ubuntu ansible tmp ansible tmp sleep using module file usr lib dist packages ansible modules cloud amazon instance facts py put home ubuntu ansible tmp ansible local to home ubuntu ansible tmp ansible tmp instance facts py exec bin sh c chmod u x home ubuntu ansible tmp ansible tmp home ubuntu ansible tmp ansible tmp instance facts py sleep exec bin sh c usr bin python home ubuntu ansible tmp ansible tmp instance facts py sleep exec bin sh c rm f r home ubuntu ansible tmp ansible tmp dev null sleep ok changed false instances hidden for confidentiality invocation module args aws access key null aws secret key null url null filters tag monitoring test 
instance ids profile null region null security token null validate certs true task task path home ubuntu bitbucket dev ops ansible tst yml establish local connection for user ubuntu exec bin sh c echo ubuntu sleep exec bin sh c umask mkdir p echo home ubuntu ansible tmp ansible tmp echo ansible tmp echo home ubuntu ansible tmp ansible tmp sleep using module file usr lib dist packages ansible modules cloud amazon instance py put home ubuntu ansible tmp ansible local to home ubuntu ansible tmp ansible tmp instance py exec bin sh c chmod u x home ubuntu ansible tmp ansible tmp home ubuntu ansible tmp ansible tmp instance py sleep exec bin sh c usr bin python home ubuntu ansible tmp ansible tmp instance py sleep exec bin sh c rm f r home ubuntu ansible tmp ansible tmp dev null sleep the full traceback is traceback most recent call last file tmp ansible ansible module instance py line in main file tmp ansible ansible module instance py line in main ensure present existing matches existing matches changed changed state state file tmp ansible ansible module instance py line in ensure present handle existing existing matches changed state file tmp ansible ansible module instance py line in handle existing changed add or update instance profile existing matches module params get instance role file tmp ansible ansible module instance py line in add or update instance profile association describe iam instance profile associations filters attributeerror object has no attribute describe iam instance profile associations failed item hidden for confidentiality module stderr traceback most recent call last n file tmp ansible ansible module instance py line in n main n file tmp ansible ansible module instance py line in main n ensure present existing matches existing matches changed changed state state n file tmp ansible ansible module instance py line in ensure present n handle existing existing matches changed state n file tmp ansible ansible module instance py line in handle 
existing n changed add or update instance profile existing matches module params get instance role n file tmp ansible ansible module instance py line in add or update instance profile n association describe iam instance profile associations filters nattributeerror object has no attribute describe iam instance profile associations n module stdout msg module failure rc to retry use limit home ubuntu bitbucket dev ops ansible tst retry play recap localhost ok changed unreachable failed ubuntu ip bitbucket dev ops ansible
| 1
|
660,827
| 22,032,648,686
|
IssuesEvent
|
2022-05-28 04:35:39
|
hackforla/tdm-calculator
|
https://api.github.com/repos/hackforla/tdm-calculator
|
closed
|
Implement Bonus Package information to accordians on Calculation/4
|
role: front-end level: hard decision p-Feature - Bonus Packages dependencies priority: MUST HAVE p-Feature - Strategies Page
|
### Overview
We have a new design for showing the bonus package information on the site that we will need to implement. Dependency to issue #1137
### Action Items
- [x] Eliminate conditional page /4 for users qualifying for bonus packages
- [x] add bonus package information as tool tips to the bonus package section
- [x] rename bonus package sections and information to match design
- [x] change color of header on bonus package to match design
- [x] add box to top of Page /4 with "you qualify for bonus package to earn 1 extra point"
- [x] Make only one bonus point awarded even when both packages are checked.
- [x] Review changes with UI/UX
- [ ] revise wording on the bonus package tooltip to say:
'Small development projects (Program Level 1) are eligible for one or more TDM packages that allow the fulfillment of the minimum 15-point target. A bonus point is awarded for a package that is made up of strategies that work together to reinforce their effectiveness in reducing drive-alone trips.
Bonus Packages may not be ideal for all projects but are a way to provide easy compliance and implementation for small projects'.
### Resources/Instructions
[Prototype Figma Link](https://www.figma.com/proto/nD9QK56Mzq7xNSaSUoeGx0/TDM-Calculator?page-id=5293%3A18692&node-id=5444%3A19449&viewport=241%2C48%2C0.25&scaling=min-zoom&starting-point-node-id=5444%3A19449&show-proto-sidebar=1)




|
1.0
|
Implement Bonus Package information to accordians on Calculation/4 - ### Overview
We have a new design for showing the bonus package information on the site that we will need to implement. Dependency to issue #1137
### Action Items
- [x] Eliminate conditional page /4 for users qualifying for bonus packages
- [x] add bonus package information as tool tips to the bonus package section
- [x] rename bonus package sections and information to match design
- [x] change color of header on bonus package to match design
- [x] add box to top of Page /4 with "you qualify for bonus package to earn 1 extra point"
- [x] Make only one bonus point awarded even when both packages are checked.
- [x] Review changes with UI/UX
- [ ] revise wording on the bonus package tooltip to say:
'Small development projects (Program Level 1) are eligible for one or more TDM packages that allow the fulfillment of the minimum 15-point target. A bonus point is awarded for a package that is made up of strategies that work together to reinforce their effectiveness in reducing drive-alone trips.
Bonus Packages may not be ideal for all projects but are a way to provide easy compliance and implementation for small projects'.
### Resources/Instructions
[Prototype Figma Link](https://www.figma.com/proto/nD9QK56Mzq7xNSaSUoeGx0/TDM-Calculator?page-id=5293%3A18692&node-id=5444%3A19449&viewport=241%2C48%2C0.25&scaling=min-zoom&starting-point-node-id=5444%3A19449&show-proto-sidebar=1)




|
non_main
|
implement bonus package information to accordians on calculation overview we have a new design for showing the bonus package information on the site that we will need to implement dependency to issue action items eliminate conditional page for users qualifying for bonus packages add bonus package information as tool tips to the bonus package section rename bonus package sections and information to match design change color of header on bonus package to match design add box to top of page with you qualify for bonus package to earn extra point make only one bonus point awarded even when both packages are checked review changes with ui ux revise wording on the bonus package tooltip to say small development projects program level are eligible for one or more tdm packages that allow the fulfillment of the minimum point target a bonus point is awarded for a package that is made up of strategies that work together to reinforce their effectiveness in reducing drive alone trips bonus packages may not be ideal for all projects but are a way to provide easy compliance and implementation for small projects resources instructions
| 0
|
606
| 4,104,079,971
|
IssuesEvent
|
2016-06-05 04:31:56
|
spyder-ide/spyder
|
https://api.github.com/repos/spyder-ide/spyder
|
closed
|
Spyder doesn't work with QtWebEngine
|
Type-Bug Type-Maintainability
|
## Description of your problem
I tried to launch spyder 3.0.0 beta2 from command line with qt5 installed from homebrew, sip and pyqt5 install from source code and spyder preview installed from pip, when I launched it, the error log said
Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/spyderlib/spyder.py", line 3119, in main
mainwindow = run_spyder(app, options, args)
File "/usr/local/lib/python3.5/site-packages/spyderlib/spyder.py", line 3005, in run_spyder
main.setup()
File "/usr/local/lib/python3.5/site-packages/spyderlib/spyder.py", line 822, in setup
message=_("Spyder Internal Console\n\n"
File "/usr/local/lib/python3.5/site-packages/spyderlib/plugins/console.py", line 79, in __init__
self.find_widget.set_editor(self.shell)
File "/usr/local/lib/python3.5/site-packages/spyderlib/widgets/findreplace.py", line 250, in set_editor
from spyderlib.qt.QtWebKit import QWebView
File "/usr/local/lib/python3.5/site-packages/spyderlib/qt/QtWebKit.py", line 10, in <module>
from PyQt5.QtWebKitWidgets import QWebPage, QWebView # analysis:ignore
ImportError: dlopen(/usr/local/lib/python3.5/site-packages/PyQt5/QtWebKitWidgets.so, 2): Library not loaded: /usr/local/opt/qt5/lib/QtWebKitWidgets.framework/Versions/5/QtWebKitWidgets
Referenced from: /usr/local/lib/python3.5/site-packages/PyQt5/QtWebKitWidgets.so
Reason: image not found
* Spyder Version: 3.0.0 beta2
* Python Version: 3.5.1
* Operating system: OS X 10.11 El Cap
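For reference, Qt dropped QtWebKit from its binary releases around 5.6 (Homebrew's `qt5` followed suit), which is why the `dlopen` of `QtWebKitWidgets` fails here; the direction of the fix is to prefer QtWebEngine and only fall back to the legacy binding. A hedged sketch of that import-fallback pattern — the helper name is illustrative, while the PyQt5 module names are the real ones from the traceback:

```python
import importlib

def load_web_module(candidates=("PyQt5.QtWebEngineWidgets",
                                "PyQt5.QtWebKitWidgets")):
    """Return the first importable web-view binding, QtWebEngine first.

    Raises ImportError only when none of the candidates can be imported.
    """
    last_error = None
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError as exc:
            last_error = exc
    raise ImportError("no Qt web view binding available") from last_error
```

The caller would then pull `QWebEngineView` or `QWebView` out of whichever module loaded, instead of hard-importing `QtWebKitWidgets` as the traceback shows.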
|
True
|
Spyder doesn't work with QtWebEngine - ## Description of your problem
I tried to launch spyder 3.0.0 beta2 from command line with qt5 installed from homebrew, sip and pyqt5 install from source code and spyder preview installed from pip, when I launched it, the error log said
Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/spyderlib/spyder.py", line 3119, in main
mainwindow = run_spyder(app, options, args)
File "/usr/local/lib/python3.5/site-packages/spyderlib/spyder.py", line 3005, in run_spyder
main.setup()
File "/usr/local/lib/python3.5/site-packages/spyderlib/spyder.py", line 822, in setup
message=_("Spyder Internal Console\n\n"
File "/usr/local/lib/python3.5/site-packages/spyderlib/plugins/console.py", line 79, in __init__
self.find_widget.set_editor(self.shell)
File "/usr/local/lib/python3.5/site-packages/spyderlib/widgets/findreplace.py", line 250, in set_editor
from spyderlib.qt.QtWebKit import QWebView
File "/usr/local/lib/python3.5/site-packages/spyderlib/qt/QtWebKit.py", line 10, in <module>
from PyQt5.QtWebKitWidgets import QWebPage, QWebView # analysis:ignore
ImportError: dlopen(/usr/local/lib/python3.5/site-packages/PyQt5/QtWebKitWidgets.so, 2): Library not loaded: /usr/local/opt/qt5/lib/QtWebKitWidgets.framework/Versions/5/QtWebKitWidgets
Referenced from: /usr/local/lib/python3.5/site-packages/PyQt5/QtWebKitWidgets.so
Reason: image not found
* Spyder Version: 3.0.0 beta2
* Python Version: 3.5.1
* Operating system: OS X 10.11 El Cap
|
main
|
spyder doesn t work with qtwebengine description of your problem i tried to launch spyder from command line with installed from homebrew sip and install from source code and spyder preview installed from pip when i launched it the error log said traceback most recent call last file usr local lib site packages spyderlib spyder py line in main mainwindow run spyder app options args file usr local lib site packages spyderlib spyder py line in run spyder main setup file usr local lib site packages spyderlib spyder py line in setup message spyder internal console n n file usr local lib site packages spyderlib plugins console py line in init self find widget set editor self shell file usr local lib site packages spyderlib widgets findreplace py line in set editor from spyderlib qt qtwebkit import qwebview file usr local lib site packages spyderlib qt qtwebkit py line in from qtwebkitwidgets import qwebpage qwebview analysis ignore importerror dlopen usr local lib site packages qtwebkitwidgets so library not loaded usr local opt lib qtwebkitwidgets framework versions qtwebkitwidgets referenced from usr local lib site packages qtwebkitwidgets so reason image not found spyder version python version operating system os x el cap
| 1
|
576
| 4,055,412,755
|
IssuesEvent
|
2016-05-24 15:24:18
|
spyder-ide/spyder
|
https://api.github.com/repos/spyder-ide/spyder
|
closed
|
Start testing with pytest/pytest-qt and coverage for Spyder
|
Type-Enhancement Type-Maintainability
|
I've never thought of creating a test suite because Spyder has a lot of moving parts, and it's hard to sit down and provide a test for each of them.
However, I've heard a lot lately that Spyder is considered unstable by several people. So I think we should start (at least) by testing that Spyder starts with the git-master versions of all its optional dependencies, to notice and fix startup crashes as quickly as we can.
I also just found this project
https://github.com/pytest-dev/pytest-qt
that seems to provide a good foundation to create a more comprehensive test suite.
Pinging @goanpeca and @blink1073 for discussion ;-)
|
True
|
Start testing with pytest/pytest-qt and coverage for Spyder - I've never thought of creating a test suite because Spyder has a lot of moving parts, and it's hard to sit down and provide a test for each of them.
However, I've heard a lot lately that Spyder is considered unstable by several people. So I think we should start (at least) by testing that Spyder starts with the git-master versions of all its optional dependencies, to notice and fix startup crashes as quickly as we can.
I also just found this project
https://github.com/pytest-dev/pytest-qt
that seems to provide a good foundation to create a more comprehensive test suite.
Pinging @goanpeca and @blink1073 for discussion ;-)
|
main
|
start testing with pytest pytest qt and coverage for spyder i ve never thought of creating a test suite because spyder has a lot of moving parts and it s hard to sit down and provide a test for each of them however i ve heard a lot lately that spyder is considered unstable by several people so i think we should start at least by testing that spyder starts with the git master versions of all its optional dependencies to notice and fix startup crashes as quickly as we can i also just found this project that seems to provide a good foundation to create a more comprehensive test suite pinging goanpeca and for discussion
| 1
|
825
| 4,461,291,886
|
IssuesEvent
|
2016-08-24 04:33:38
|
duckduckgo/zeroclickinfo-spice
|
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
|
closed
|
Forecast: Add % precipitation to answer
|
Improvement Maintainer Input Requested PR Received Suggestion
|
could show % precipitation to answer as this could be useful to some users.
------
IA Page: http://duck.co/ia/view/forecast
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @himanshu0113
|
True
|
Forecast: Add % precipitation to answer - could show % precipitation to answer as this could be useful to some users.
------
IA Page: http://duck.co/ia/view/forecast
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @himanshu0113
|
main
|
forecast add precipitation to answer could show precipitation to answer as this could be useful to some users ia page
| 1
|
5,532
| 27,658,483,586
|
IssuesEvent
|
2023-03-12 08:35:06
|
leonhard-s/ps2-api-docs
|
https://api.github.com/repos/leonhard-s/ps2-api-docs
|
closed
|
Consider simplifying OpenAPI `paths`
|
enhancement maintainability
|
We are currently listing the paths for each collection individually. This gives more flexibility with collection-specific parameters, but also adds a lot of boilerplate code. To be investigated.
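One option worth sketching: collapse the per-collection entries into a single templated path and push the collection list into an `enum`. The fragment below is illustrative only — the real Census URL layout also carries a service-id segment, and the enum values shown are a made-up subset:

```yaml
paths:
  /get/{namespace}/{collection}:
    get:
      summary: Query any collection through one templated path
      parameters:
        - name: namespace
          in: path
          required: true
          schema:
            type: string
        - name: collection
          in: path
          required: true
          schema:
            type: string
            enum: [character, item, faction]   # illustrative subset
```

This trades collection-specific query parameters for far less boilerplate, which is exactly the tension the issue describes.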
|
True
|
Consider simplifying OpenAPI `paths` - We are currently listing the paths for each collection individually. This gives more flexibility with collection-specific parameters, but also adds a lot of boilerplate code. To be investigated.
|
main
|
consider simplifying openapi paths we are currently listing the paths for each collection individually this gives more flexibility with collection specific parameters but also adds a lot of boilerplate code to be investigated
| 1
|
509
| 3,868,521,446
|
IssuesEvent
|
2016-04-10 00:41:12
|
duckduckgo/zeroclickinfo-spice
|
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
|
closed
|
XKCD: Cached comics are outdated
|
Maintainer Approved
|
The ZCI shows another comic than the website. It is not up to date.
Didn't check the source, but it seems as if it is not able to update properly.
------
IA Page: http://duck.co/ia/view/xkcd
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sdball
|
True
|
XKCD: Cached comics are outdated - The ZCI shows another comic than the website. It is not up to date.
Didn't check the source, but it seems as if it is not able to update properly.
------
IA Page: http://duck.co/ia/view/xkcd
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @sdball
|
main
|
xkcd cached comics are outdated the zci shows another comic than the website it is not up to date didn t check the source but it seems as if it is not able to update properly ia page sdball
| 1
|
918
| 4,622,130,075
|
IssuesEvent
|
2016-09-27 06:01:08
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
Error bring dockercompose up
|
affects_2.1 bug_report cloud docker waiting_on_maintainer
|
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
docker_service
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
n/a
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ubuntu 14.04
##### SUMMARY
<!--- Explain the problem briefly -->
I am trying to call docker-compose from Ansible using docker_service module and I get an error with very few details.
If I call docker-compose directly, everything works correctly.
Other tools versions:
Python 2.7.6
pip 8.1.2 from /usr/local/lib/python2.7/dist-packages (python 2.7)
docker_py-1.9.0
docker-compose version 1.7.1, build 6c29830
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
1. Have a playbook containing a docker_service module
2. Run playbook
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Start Docker-compose
docker_service:
project_src: ../docker-compose/
files: dev.yml
state: present
scale:
server1: 2
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Docker-compose starts all containers without errors.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Received following error (-vvvv) :
<!--- Paste verbatim command output between quotes below -->
```
TASK [Start Docker-compose] **************************************
task path: /home/andreeav/workspace/docker-workspace/ansible/start.yml:17
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: andreeav
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1474364184.49-149863168021526 `" && echo ansible-tmp-1474364184.49-149863168021526="` echo $HOME/.ansible/tmp/ansible-tmp-1474364184.49-149863168021526 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpzmVeR1 TO /home/andreeav/.ansible/tmp/ansible-tmp-1474364184.49-149863168021526/docker_service
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_GB.UTF-8 LC_ALL=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8 /usr/bin/python /home/andreeav/.ansible/tmp/ansible-tmp-1474364184.49-149863168021526/docker_service; rm -rf "/home/andreeav/.ansible/tmp/ansible-tmp-1474364184.49-149863168021526/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"api_version": null, "build": true, "cacert_path": null, "cert_path": null, "debug": false, "definition": null, "dependencies": true, "docker_host": null, "files": ["dev.yml"], "filter_logger": false, "hostname_check": false, "key_path": null, "project_name": null, "project_src": "../docker-compose/", "recreate": "smart", "remove_images": null, "remove_orphans": false, "remove_volumes": false, "restarted": false, "scale": {"server1": 2}, "services": null, "ssl_version": null, "state": "present", "stopped": false, "timeout": null, "tls": null, "tls_hostname": null, "tls_verify": null}, "module_name": "docker_service"}, "msg": "Error bring dockercompose up - "}
```
|
True
|
Error bring dockercompose up - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
docker_service
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
n/a
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ubuntu 14.04
##### SUMMARY
<!--- Explain the problem briefly -->
I am trying to call docker-compose from Ansible using docker_service module and I get an error with very few details.
If I call docker-compose directly, everything works correctly.
Other tools versions:
Python 2.7.6
pip 8.1.2 from /usr/local/lib/python2.7/dist-packages (python 2.7)
docker_py-1.9.0
docker-compose version 1.7.1, build 6c29830
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
1. Have a playbook containing a docker_service module
2. Run playbook
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Start Docker-compose
docker_service:
project_src: ../docker-compose/
files: dev.yml
state: present
scale:
server1: 2
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Docker-compose starts all containers without errors.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
Received following error (-vvvv) :
<!--- Paste verbatim command output between quotes below -->
```
TASK [Start Docker-compose] **************************************
task path: /home/andreeav/workspace/docker-workspace/ansible/start.yml:17
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: andreeav
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1474364184.49-149863168021526 `" && echo ansible-tmp-1474364184.49-149863168021526="` echo $HOME/.ansible/tmp/ansible-tmp-1474364184.49-149863168021526 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpzmVeR1 TO /home/andreeav/.ansible/tmp/ansible-tmp-1474364184.49-149863168021526/docker_service
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_GB.UTF-8 LC_ALL=en_GB.UTF-8 LC_MESSAGES=en_GB.UTF-8 /usr/bin/python /home/andreeav/.ansible/tmp/ansible-tmp-1474364184.49-149863168021526/docker_service; rm -rf "/home/andreeav/.ansible/tmp/ansible-tmp-1474364184.49-149863168021526/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"api_version": null, "build": true, "cacert_path": null, "cert_path": null, "debug": false, "definition": null, "dependencies": true, "docker_host": null, "files": ["dev.yml"], "filter_logger": false, "hostname_check": false, "key_path": null, "project_name": null, "project_src": "../docker-compose/", "recreate": "smart", "remove_images": null, "remove_orphans": false, "remove_volumes": false, "restarted": false, "scale": {"server1": 2}, "services": null, "ssl_version": null, "state": "present", "stopped": false, "timeout": null, "tls": null, "tls_hostname": null, "tls_verify": null}, "module_name": "docker_service"}, "msg": "Error bring dockercompose up - "}
```
|
main
|
error bring dockercompose up issue type bug report component name docker service ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables n a os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary i am trying to call docker compose from ansible using docker service module and i get an error with very few details if i call docker compose directly everything works correctly other tools versions python pip from usr local lib dist packages python docker py docker compose version build steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used have a playbook containing a docker service module run playbook name start docker compose docker service project src docker compose files dev yml state present scale expected results docker compose starts all containers without errors actual results received following error vvvv task task path home andreeav workspace docker workspace ansible start yml establish local connection for user andreeav exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home andreeav ansible tmp ansible tmp docker service exec bin sh c lang en gb utf lc all en gb utf lc messages en gb utf usr bin python home andreeav ansible tmp ansible tmp docker service rm rf home andreeav ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args api version null build true cacert path null cert path null debug false definition null dependencies true docker host null files filter logger false hostname check false key path null project name null project src docker compose recreate smart remove images null remove orphans false remove volumes false restarted false scale services null ssl version null state present stopped false timeout null tls null tls hostname null tls verify null module name docker service msg error bring dockercompose up
| 1
|
121,202
| 15,867,620,181
|
IssuesEvent
|
2021-04-08 17:07:59
|
mozilla/foundation.mozilla.org
|
https://api.github.com/repos/mozilla/foundation.mozilla.org
|
closed
|
[GA Dashboard] Brownbag session
|
design
|
Do a brownbag session on GA dashboard for the design team so everybody knows how to create their own dashboard.
We can do a Content Dashboard as example and the final product can then be used by Xavier and content team.
**To do:**
- [x] Think about what would be useful to have in the dashboard - ask Xavier what he would like to see
- [x] Do a dashboard
- [x] Deconstruct the dashboard with the team in a presentation
- [x] They can try to reproduce the same dashboard in the session, by themselves.
|
1.0
|
[GA Dashboard] Brownbag session - Do a brownbag session on GA dashboard for the design team so everybody knows how to create their own dashboard.
We can do a Content Dashboard as example and the final product can then be used by Xavier and content team.
**To do:**
- [x] Think about what would be useful to have in the dashboard - ask Xavier what he would like to see
- [x] Do a dashboard
- [x] Deconstruct the dashboard with the team in a presentation
- [x] They can try to reproduce the same dashboard in the session, by themselves.
|
non_main
|
brownbag session do a brownbag session on ga dashboard for the design team so everybody knows how to create their own dashboard we can do a content dashboard as example and the final product can then be used by xavier and content team to do think about what would be useful to have in the dashboard ask xavier what he would like to see do a dashboard deconstruct the dashboard with the team in a presentation they can try to reproduce the same dashboard in the session by themselves
| 0
|
221,579
| 17,359,049,267
|
IssuesEvent
|
2021-07-29 17:52:41
|
nasa/cFE
|
https://api.github.com/repos/nasa/cFE
|
closed
|
Hard coded time print format checks fail when non-default epoch is used
|
unit-test
|
**Is your feature request related to a problem? Please describe.**
Epoch is configurable:
https://github.com/nasa/cFE/blob/063b4d8a9c4a7e822af5f3e4017599159b985bb0/cmake/sample_defs/sample_mission_cfg.h#L186-L190
Time unit tests hard-code checks that are impacted by epoch configuration, and fail when it's changed (example):
https://github.com/nasa/cFE/blob/063b4d8a9c4a7e822af5f3e4017599159b985bb0/modules/time/ut-coverage/time_UT.c#L398-L424
**Describe the solution you'd like**
Update tests to work with configured epoch. Either adjust for configured epoch or test the actual values (not print time).
**Describe alternatives you've considered**
None
**Additional context**
None
**Requester Info**
Jacob Hageman - NASA/GSFC, @excaliburtb
|
1.0
|
Hard coded time print format checks fail when non-default epoch is used - **Is your feature request related to a problem? Please describe.**
Epoch is configurable:
https://github.com/nasa/cFE/blob/063b4d8a9c4a7e822af5f3e4017599159b985bb0/cmake/sample_defs/sample_mission_cfg.h#L186-L190
Time unit tests hard-code checks that are impacted by epoch configuration, and fail when it's changed (example):
https://github.com/nasa/cFE/blob/063b4d8a9c4a7e822af5f3e4017599159b985bb0/modules/time/ut-coverage/time_UT.c#L398-L424
**Describe the solution you'd like**
Update tests to work with configured epoch. Either adjust for configured epoch or test the actual values (not print time).
**Describe alternatives you've considered**
None
**Additional context**
None
**Requester Info**
Jacob Hageman - NASA/GSFC, @excaliburtb
|
non_main
|
hard coded time print format checks fail when non default epoch is used is your feature request related to a problem please describe epoch is configurable time unit tests hard code checks that are impacted by epoch configuration and fail when it s changed example describe the solution you d like update tests to work with configured epoch either adjust for configured epoch or test the actual values not print time describe alternatives you ve considered none additional context none requester info jacob hageman nasa gsfc excaliburtb
| 0
|
701,391
| 24,096,413,492
|
IssuesEvent
|
2022-09-19 19:10:37
|
RobotLocomotion/drake
|
https://api.github.com/repos/RobotLocomotion/drake
|
opened
|
Release 1.8.0 apt, docker, S3
|
component: distribution priority: high
|
Please manually tag docker images and upload the releases to S3:
https://github.com/RobotLocomotion/drake/releases/tag/v1.8.0
|
1.0
|
Release 1.8.0 apt, docker, S3 - Please manually tag docker images and upload the releases to S3:
https://github.com/RobotLocomotion/drake/releases/tag/v1.8.0
|
non_main
|
release apt docker please manually tag docker images and upload the releases to
| 0
|
114,742
| 17,260,159,413
|
IssuesEvent
|
2021-07-22 06:11:19
|
Chiencc/WS-Bolt
|
https://api.github.com/repos/Chiencc/WS-Bolt
|
opened
|
CVE-2021-29425 (Medium) detected in commons-io-2.0.1.jar
|
security vulnerability
|
## CVE-2021-29425 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-io-2.0.1.jar</b></p></summary>
<p>Commons-IO contains utility classes, stream implementations, file filters, file comparators and endian classes.</p>
<p>Library home page: <a href="http://commons.apache.org/io/">http://commons.apache.org/io/</a></p>
<p>Path to dependency file: WS-Bolt/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-io/commons-io/2.0.1/commons-io-2.0.1.jar</p>
<p>
Dependency Hierarchy:
- struts2-core-2.3.15.jar (Root Library)
- :x: **commons-io-2.0.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Chiencc/WS-Bolt/commit/82c63bfcaf0b25bcec799209e53233e265b5955f">82c63bfcaf0b25bcec799209e53233e265b5955f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Commons IO before 2.7, When invoking the method FileNameUtils.normalize with an improper input string, like "//../foo", or "\\..\foo", the result would be the same value, thus possibly providing access to files in the parent directory, but not further above (thus "limited" path traversal), if the calling code would use the result to construct a path value.
<p>Publish Date: 2021-04-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29425>CVE-2021-29425</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425</a></p>
<p>Release Date: 2021-04-13</p>
<p>Fix Resolution: commons-io:commons-io:2.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-29425 (Medium) detected in commons-io-2.0.1.jar - ## CVE-2021-29425 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-io-2.0.1.jar</b></p></summary>
<p>Commons-IO contains utility classes, stream implementations, file filters, file comparators and endian classes.</p>
<p>Library home page: <a href="http://commons.apache.org/io/">http://commons.apache.org/io/</a></p>
<p>Path to dependency file: WS-Bolt/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-io/commons-io/2.0.1/commons-io-2.0.1.jar</p>
<p>
Dependency Hierarchy:
- struts2-core-2.3.15.jar (Root Library)
- :x: **commons-io-2.0.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Chiencc/WS-Bolt/commit/82c63bfcaf0b25bcec799209e53233e265b5955f">82c63bfcaf0b25bcec799209e53233e265b5955f</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Commons IO before 2.7, When invoking the method FileNameUtils.normalize with an improper input string, like "//../foo", or "\\..\foo", the result would be the same value, thus possibly providing access to files in the parent directory, but not further above (thus "limited" path traversal), if the calling code would use the result to construct a path value.
<p>Publish Date: 2021-04-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29425>CVE-2021-29425</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-29425</a></p>
<p>Release Date: 2021-04-13</p>
<p>Fix Resolution: commons-io:commons-io:2.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_main
|
cve medium detected in commons io jar cve medium severity vulnerability vulnerable library commons io jar commons io contains utility classes stream implementations file filters file comparators and endian classes library home page a href path to dependency file ws bolt pom xml path to vulnerable library home wss scanner repository commons io commons io commons io jar dependency hierarchy core jar root library x commons io jar vulnerable library found in head commit a href found in base branch main vulnerability details in apache commons io before when invoking the method filenameutils normalize with an improper input string like foo or foo the result would be the same value thus possibly providing access to files in the parent directory but not further above thus limited path traversal if the calling code would use the result to construct a path value publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution commons io commons io step up your open source security game with whitesource
| 0
|
833
| 4,470,135,519
|
IssuesEvent
|
2016-08-25 15:04:08
|
duckduckgo/zeroclickinfo-goodies
|
https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies
|
closed
|
Least Common Multiple:
|
Maintainer Input Requested
|
For some reason. The web search doesnt display the least common multiple IA page.
https://duckduckgo.com/?q=lcm+8+4&t=hi&ia=web
------
IA Page: http://duck.co/ia/view/least_common_multiple
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @rpatel3001
|
True
|
Least Common Multiple: - For some reason. The web search doesnt display the least common multiple IA page.
https://duckduckgo.com/?q=lcm+8+4&t=hi&ia=web
------
IA Page: http://duck.co/ia/view/least_common_multiple
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @rpatel3001
|
main
|
least common multiple for some reason the web search doesnt display the least common multiple ia page ia page
| 1
|
11,678
| 3,214,426,057
|
IssuesEvent
|
2015-10-07 01:48:12
|
affiliatewp/AffiliateWP
|
https://api.github.com/repos/affiliatewp/AffiliateWP
|
closed
|
Prevent spaces in affiliate usernames.
|
bug needs testing
|
If an affiliate has a WordPress username with a space in it, eg `Dat Guy` then their affiliate referral URL won't work:
`http://mysite.com/ref/Dat Guy`
or
`http://mysite.com/ref/Dat%20Guy`
WordPress itself doesn't let you register a username with a space in it however our affiliate registration form does. We should prevent spaces in usernames.
ref: https://secure.helpscout.net/conversation/121113989/17016/

|
1.0
|
Prevent spaces in affiliate usernames. - If an affiliate has a WordPress username with a space in it, eg `Dat Guy` then their affiliate referral URL won't work:
`http://mysite.com/ref/Dat Guy`
or
`http://mysite.com/ref/Dat%20Guy`
WordPress itself doesn't let you register a username with a space in it however our affiliate registration form does. We should prevent spaces in usernames.
ref: https://secure.helpscout.net/conversation/121113989/17016/

|
non_main
|
prevent spaces in affiliate usernames if an affiliate has a wordpress username with a space in it eg dat guy then their affiliate referral url won t work guy or wordpress itself doesn t let you register a username with a space in it however our affiliate registration form does we should prevent spaces in usernames ref
| 0
|
389,998
| 11,520,478,847
|
IssuesEvent
|
2020-02-14 14:53:09
|
bryntum/support
|
https://api.github.com/repos/bryntum/support
|
closed
|
Cannot type into duration field of new task
|
bug high-priority resolved
|
Open basic demo
Add new task with:
`gantt.taskStore.rootNode.appendChild({name: 'New task'});`
Right click -> Edit
Advanced tab -> set a constraint type
General tab -> Try typing in Duration field, nothing happens
|
1.0
|
Cannot type into duration field of new task - Open basic demo
Add new task with:
`gantt.taskStore.rootNode.appendChild({name: 'New task'});`
Right click -> Edit
Advanced tab -> set a constraint type
General tab -> Try typing in Duration field, nothing happens
|
non_main
|
cannot type into duration field of new task open basic demo add new task with gantt taskstore rootnode appendchild name new task right click edit advanced tab set a constraint type general tab try typing in duration field nothing happens
| 0
|
66,342
| 27,421,128,437
|
IssuesEvent
|
2023-03-01 16:47:40
|
cityofaustin/atd-data-tech
|
https://api.github.com/repos/cityofaustin/atd-data-tech
|
closed
|
Prevent automatic assignment of ATD Banner Permits to DSD task lists in AMANDA
|
Service: Apps Need: 2-Should Have Workgroup: SMO Type: Enhancement Product: AMANDA
|
<!-- Email -->
<!-- mirna.garcia@austintexas.gov -->
> What application are you using?
AMANDA
> Describe the problem.
ATD's Banner program within the Smart Mobility Office creates new permits in AMANDA to process over-the-street and lamppost banner reservations submitted through Knack. However, once a permit has been created in AMANDA, the permit is automatically assigned to Heather Parajuli in Development Services. How can we prevent future ATD Banner permits from being automatically assigned to DSD task lists for review? Is there a step in this process that can remove the automatic assignment of the permit to DSD? Or perhaps change ATD SMO process when creating a new permit? The similarities between ATD and DSD Banner permits are the initial abbreviations when creating the new permit in AMANDA (SB) in the "Permit Type" field. Would changing this prevent the permits from going to DSD?
> Describe the outcome you'd like to see when this feature is implemented.
Ensure that ATD Smart Mobility Office Banner permits are no longer assigned to DSD queues/task lists as this is a different department that is not involved in ATD Banner permit reviews.
> Describe any workarounds you currently have in place or alternative solutions you've considered.
There are currently no workarounds to avoid the issue.
> Is there anything else we should know?
As soon as possible
> Requested By
Mirna G.
Request ID: DTS23-106065
|
1.0
|
Prevent automatic assignment of ATD Banner Permits to DSD task lists in AMANDA - <!-- Email -->
<!-- mirna.garcia@austintexas.gov -->
> What application are you using?
AMANDA
> Describe the problem.
ATD's Banner program within the Smart Mobility Office creates new permits in AMANDA to process over-the-street and lamppost banner reservations submitted through Knack. However, once a permit has been created in AMANDA, the permit is automatically assigned to Heather Parajuli in Development Services. How can we prevent future ATD Banner permits from being automatically assigned to DSD task lists for review? Is there a step in this process that can remove the automatic assignment of the permit to DSD? Or perhaps change ATD SMO process when creating a new permit? The similarities between ATD and DSD Banner permits are the initial abbreviations when creating the new permit in AMANDA (SB) in the "Permit Type" field. Would changing this prevent the permits from going to DSD?
> Describe the outcome you'd like to see when this feature is implemented.
Ensure that ATD Smart Mobility Office Banner permits are no longer assigned to DSD queues/task lists as this is a different department that is not involved in ATD Banner permit reviews.
> Describe any workarounds you currently have in place or alternative solutions you've considered.
There are currently no workarounds to avoid the issue.
> Is there anything else we should know?
As soon as possible
> Requested By
Mirna G.
Request ID: DTS23-106065
|
non_main
|
prevent automatic assignment of atd banner permits to dsd task lists in amanda what application are you using amanda describe the problem atd s banner program within the smart mobility office creates new permits in amanda to process over the street and lamppost banner reservations submitted through knack however once a permit has been created in amanda the permit is automatically assigned to heather parajuli in development services how can we prevent future atd banner permits from being automatically assigned to dsd task lists for review is there a step in this process that can remove the automatic assignment of the permit to dsd or perhaps change atd smo process when creating a new permit the similarities between atd and dsd banner permits are the initial abbreviations when creating the new permit in amanda sb in the permit type field would changing this prevent the permits from going to dsd describe the outcome you d like to see when this feature is implemented ensure that atd smart mobility office banner permits are no longer assigned to dsd queues task lists as this is a different department that is not involved in atd banner permit reviews describe any workarounds you currently have in place or alternative solutions you ve considered there are currently no workarounds to avoid the issue is there anything else we should know as soon as possible requested by mirna g request id
| 0
|
103,627
| 16,602,942,845
|
IssuesEvent
|
2021-06-01 22:18:25
|
gms-ws-sandbox/nibrs
|
https://api.github.com/repos/gms-ws-sandbox/nibrs
|
opened
|
CVE-2020-10673 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2020-10673 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.8.jar</b>, <b>jackson-databind-2.9.5.jar</b>, <b>jackson-databind-2.8.10.jar</b>, <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-validate-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.10.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/jackson-databind-2.8.10.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.10/jackson-databind-2.8.10.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.10.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-xmlfile/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.caucho.config.types.ResourceRef (aka caucho-quercus).
<p>Publish Date: 2020-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10673>CVE-2020-10673</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2660">https://github.com/FasterXML/jackson-databind/issues/2660</a></p>
<p>Release Date: 2020-03-18</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-json:2.1.5.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.5","packageFilePaths":["/tools/nibrs-validate-common/pom.xml","/tools/nibrs-flatfile/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.9.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.10","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-route/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-validation/pom.xml","/web/nibrs-web/pom.xml","/tools/nibrs-staging-data/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVer
sion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-10673","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.caucho.config.types.ResourceRef (aka caucho-quercus).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10673","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-10673 (High) detected in multiple libraries - ## CVE-2020-10673 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.8.jar</b>, <b>jackson-databind-2.9.5.jar</b>, <b>jackson-databind-2.8.10.jar</b>, <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.5.RELEASE.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-validate-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- tika-parsers-1.18.jar (Root Library)
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.10.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p>
<p>Path to vulnerable library: nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/jackson-databind-2.8.10.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.10/jackson-databind-2.8.10.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.10.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: nibrs/tools/nibrs-xmlfile/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,nibrs/web/nibrs-web/target/nibrs-web/WEB-INF/lib/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.caucho.config.types.ResourceRef (aka caucho-quercus).
<p>Publish Date: 2020-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10673>CVE-2020-10673</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2660">https://github.com/FasterXML/jackson-databind/issues/2660</a></p>
<p>Release Date: 2020-03-18</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.8","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-json:2.1.5.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.9.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.5","packageFilePaths":["/tools/nibrs-validate-common/pom.xml","/tools/nibrs-flatfile/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.tika:tika-parsers:1.18;com.fasterxml.jackson.core:jackson-databind:2.9.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.10","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.10","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"},{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/tools/nibrs-xmlfile/pom.xml","/tools/nibrs-staging-data-common/pom.xml","/tools/nibrs-route/pom.xml","/tools/nibrs-summary-report/pom.xml","/tools/nibrs-validation/pom.xml","/web/nibrs-web/pom.xml","/tools/nibrs-staging-data/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVer
sion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-10673","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.caucho.config.types.ResourceRef (aka caucho-quercus).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10673","cvss3Severity":"high","cvss3Score":"8.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_main
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs summary report common pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs validate common pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy tika parsers jar root library x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs fbi service pom xml path to vulnerable library nibrs tools nibrs fbi service target nibrs fbi service web inf lib jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file nibrs tools nibrs xmlfile pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar canner repository com fasterxml 
jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar nibrs web nibrs web target nibrs web web inf lib jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com caucho config types resourceref aka caucho quercus publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter json release com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency true dependencytree org apache tika tika parsers com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml 
jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind packagetype java groupid com fasterxml jackson core packagename jackson databind packageversion packagefilepaths istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion com fasterxml jackson core jackson databind basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com caucho config types resourceref aka caucho quercus vulnerabilityurl
| 0
|
48,656
| 13,184,711,821
|
IssuesEvent
|
2020-08-12 19:57:20
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
Multiple problems with I3Frame serialization (Trac #13)
|
IceTray Incomplete Migration Migrated from Trac defect
|
<details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/13
, reported by troy and owned by troy_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": " * crc calculation is in the wrong place\n * unnecessary buffer copies\n * crc calculation is incorrect (it is consistent on read/load, but some objects are not actually checksummed) ",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "IceTray",
"summary": "Multiple problems with I3Frame serialization",
"priority": "normal",
"keywords": "",
"time": "2007-06-03T16:31:37",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Multiple problems with I3Frame serialization (Trac #13) - <details>
<summary>_Migrated from https://code.icecube.wisc.edu/ticket/13
, reported by troy and owned by troy_</summary>
<p>
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": " * crc calculation is in the wrong place\n * unnecessary buffer copies\n * crc calculation is incorrect (it is consistent on read/load, but some objects are not actually checksummed) ",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "IceTray",
"summary": "Multiple problems with I3Frame serialization",
"priority": "normal",
"keywords": "",
"time": "2007-06-03T16:31:37",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
</p>
</details>
|
non_main
|
multiple problems with serialization trac migrated from reported by troy and owned by troy json status closed changetime description crc calculation is in the wrong place n unnecessary buffer copies n crc calculation is incorrect it is consistent on read load but some objects are not actually checksummed reporter troy cc resolution fixed ts component icetray summary multiple problems with serialization priority normal keywords time milestone owner troy type defect
| 0
|
221,405
| 7,382,769,577
|
IssuesEvent
|
2018-03-15 06:50:27
|
Cloud-CV/EvalAI
|
https://api.github.com/repos/Cloud-CV/EvalAI
|
closed
|
UI Improvements in Leaderboard
|
GSOC easy_to_fix enhancement frontend good first issue priority-high
|
- [ ] Make the team name as **Bold** on leaderboard.
- [ ] Make `Rank` and `Participant Team` field as left aligned including the title.
- [ ] Make all the other fields as right aligned including the title.
- [ ] In the heading `submission time`, change the text to `Last Submission at` & display the delta between the current time and the last submission time. For Example: If the current time is 10 am, & last submission was made at 6 am, then the field should display `4h` (10am - 6am).
- [ ] Add a border line after every entry in the leaderboard.
Please feel free to ask if you have further doubts or queries.
|
1.0
|
UI Improvements in Leaderboard - - [ ] Make the team name as **Bold** on leaderboard.
- [ ] Make `Rank` and `Participant Team` field as left aligned including the title.
- [ ] Make all the other fields as right aligned including the title.
- [ ] In the heading `submission time`, change the text to `Last Submission at` & display the delta between the current time and the last submission time. For Example: If the current time is 10 am, & last submission was made at 6 am, then the field should display `4h` (10am - 6am).
- [ ] Add a border line after every entry in the leaderboard.
Please feel free to ask if you have further doubts or queries.
|
non_main
|
ui improvements in leaderboard make the team name as bold on leaderboard make rank and participant team field as left aligned including the title make all the other fields as right aligned including the title in the heading submission time change the text to last submission at display the delta between the current time and the last submission time for example if the current time is am last submission was made at am then the field should display add a border line after every entry in the leaderboard please feel free to ask if you have further doubts or queries
| 0
|
5,238
| 26,552,651,790
|
IssuesEvent
|
2023-01-20 09:17:25
|
OpenRefine/OpenRefine
|
https://api.github.com/repos/OpenRefine/OpenRefine
|
closed
|
Remove backend commands exposed for each operation
|
enhancement maintainability
|
So far, each operation that the user can run on a project comes with the following Java classes in the backend:
* an Operation class, which holds the metadata for the operation and is responsible for its JSON serialization (which is exposed in the history tab, among others)
* a Change class (often reused by different operations), which is responsible for actually applying the operation to the project (carrying out the corresponding transformation)
* a Command class, which exposes an HTTP API to initiate the operation on a project.
Therefore, each operation comes with its own HTTP endpoint to apply it, and the frontend can call that endpoint when the user clicks on some menu item or validates some dialog, for instance.
At the same time, we have a generic endpoint to apply a series of operations, represented by their JSON serialization, on a given project. This is used in the history tab, to let the user apply a workflow exported in JSON.
This endpoint (called `apply-operations`) makes all the other operation-specific commands redundant: we should be able to use it to apply any single operation, hence removing the need for a fairly large number of other commands.
This would have the following benefits:
* remove a lot of Java classes
* refactor the frontend so that making any change to the project goes through the same code, making it easier to implement logic that should apply to all those changes (such as warning the user that they are discarding a part of the history if there are "future" history entries: #3184)
* make it easier to implement new operations, by removing the need to implement a corresponding command (but the move can be done without any breaking change with respect to the extension interface - extensions are still free to define their own commands for each operation they define).
This will likely require improving the error handling in the `apply-operations` endpoint, since it is likely that operation-specific endpoints currently offer a better handling. When an operation's JSON representation is incomplete or invalid, we want to be able to return an appropriate response which should be replicated in the frontend accordingly. This will benefit the "Apply operations" UX as well.
|
True
|
Remove backend commands exposed for each operation - So far, each operation that the user can run on a project comes with the following Java classes in the backend:
* an Operation class, which holds the metadata for the operation and is responsible for its JSON serialization (which is exposed in the history tab, among others)
* a Change class (often reused by different operations), which is responsible for actually applying the operation to the project (carrying out the corresponding transformation)
* a Command class, which exposes an HTTP API to initiate the operation on a project.
Therefore, each operation comes with its own HTTP endpoint to apply it, and the frontend can call that endpoint when the user clicks on some menu item or validates some dialog, for instance.
At the same time, we have a generic endpoint to apply a series of operations, represented by their JSON serialization, on a given project. This is used in the history tab, to let the user apply a workflow exported in JSON.
This endpoint (called `apply-operations`) makes all the other operation-specific commands redundant: we should be able to use it to apply any single operation, hence removing the need for a fairly large number of other commands.
This would have the following benefits:
* remove a lot of Java classes
* refactor the frontend so that making any change to the project goes through the same code, making it easier to implement logic that should apply to all those changes (such as warning the user that they are discarding a part of the history if there are "future" history entries: #3184)
* make it easier to implement new operations, by removing the need to implement a corresponding command (but the move can be done without any breaking change with respect to the extension interface - extensions are still free to define their own commands for each operation they define).
This will likely require improving the error handling in the `apply-operations` endpoint, since it is likely that operation-specific endpoints currently offer a better handling. When an operation's JSON representation is incomplete or invalid, we want to be able to return an appropriate response which should be replicated in the frontend accordingly. This will benefit the "Apply operations" UX as well.
|
main
|
remove backend commands exposed for each operation so far each operation that the user can run on a project comes with the following java classes in the backend an operation class which holds the metadata for the operation and is responsible for its json serialization which is exposed in the history tab among others a change class often reused by different operations which is responsible for actually applying the operation to the project carrying out the corresponding transformation a command class which exposes an http api to initiate the operation on a project therefore each operation comes with its own http endpoint to apply it and the frontend can call that endpoint when the user clicks on some menu item or validates some dialog for instance at the same time we have a generic endpoint to apply a series of operations represented by their json serialization on a given project this is used in the history tab to let the user apply a workflow exported in json this endpoint called apply operations makes all the other operation specific commands redundant we should be able to use it to apply any single operation hence removing the need for a fairly large number of other commands this would have the following benefits remove a lot of java classes refactor the frontend so that making any change to the project goes through the same code making it easier to implement logic that should apply to all those changes such as warning the user that they are discarding a part of the history if there are future history entries make it easier to implement new operations by removing the need to implement a corresponding command but the move can be done without any breaking change with respect to the extension interface extensions are still free to define their own commands for each operation they define this will likely require improving the error handling in the apply operations endpoint since it is likely that operation specific endpoints currently offer a better handling when an 
operation s json representation is incomplete or invalid we want to be able to return an appropriate response which should be replicated in the frontend accordingly this will benefit the apply operations ux as well
| 1
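The OpenRefine row above proposes routing every operation through the generic `apply-operations` endpoint. A minimal sketch of the payload side of that idea, assuming the endpoint accepts the same operation JSON that the history tab exports (the operation field values below are illustrative assumptions, not taken from a real project):

```python
import json

# One operation in the JSON shape OpenRefine exports from the history tab.
# The "op" identifier is a real operation name; the column name and
# expression are illustrative placeholders.
operations = [
    {
        "op": "core/text-transform",
        "columnName": "name",
        "expression": "value.trim()",
        "onError": "keep-original",
        "description": "Text transform on cells in column name",
    }
]

# Under the proposal, the frontend would POST this single serialized list
# to apply-operations instead of calling an operation-specific command.
payload = json.dumps(operations)
print(payload)
```

Because the payload is just the exported-history JSON, applying one operation and replaying a whole workflow become the same code path, which is the redundancy the issue wants to remove.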
|
5,240
| 26,561,871,536
|
IssuesEvent
|
2023-01-20 16:30:54
|
mozilla/foundation.mozilla.org
|
https://api.github.com/repos/mozilla/foundation.mozilla.org
|
opened
|
Wagtail 4 upgrade
|
engineering maintain needs grooming
|
### Description
Following on from #9674 - we want to upgrade Wagtail to version 4.0 (or possibly the latest version)
When it comes to upgrading, it's best to follow the increments in the versions. So to upgrade we will want to run the upgrades in the following order [ref](https://docs.wagtail.org/en/stable/releases/index.html):
- [ ] upgrade to 3.0.1 then run test suite
- [ ] upgrade to 3.0.2 then run test suite
- [ ] upgrade to 3.0.3 then run test suite
- [ ] upgrade to 4.0 then run test suite
- ... etc
It's a good idea to create single commits for each of the upgrade steps.
### What needs addressing for the upgrade to version 4
What needs to be addressed is explained in the [docs](https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations) but for clarity here is a brief outline:
- [ ] Check for any `Page.serve()` overrides and fix accordingly
- [ ] Live preview panel X-Frame-Options header [ref](https://docs.wagtail.org/en/stable/releases/4.0.html#opening-links-within-the-live-preview-panel)
- [ ] The PageRevision model has been replaced with a generic Revision model. Check for use of PageRevision
- [ ] Multiple method/class naming updates and replacements - e.g. `BaseSetting` replaced by `BaseSiteSetting`
### Additional context
- Wagtail 4 release notes upgrade considerations https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations
### Developer notes
- [ ] Create an upgrade branch from main
- [ ] Check your project’s console output for any deprecation warnings, and fix them where necessary `python -Wa manage.py check`
- [ ] Check the new version’s release notes https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations
- [ ] Check the compatible Django / Python versions [table](https://docs.wagtail.io/en/stable/releases/upgrading.html#compatible-django-python-versions), for any dependencies that need upgrading first;
- [ ] Upgrade supporting requirements (Python, Django) if necessary
- [ ] Upgrade Wagtail
- [ ] Make new migration (might result in none).
- [ ] Migrate database changes (locally)
- [ ] Implement needed changes from upgrade considerations (see above)
- [ ] Perform testing
- [ ] Run test suites
- [ ] Smoke test site / testing journeys (manually on the site)
- [ ] Smoke test admin (Click around in the admin to see if anything is broken)
- [ ] Check for new deprecations `python -Wa manage.py check` and fix if necessary
## Acceptance criteria
- [ ] Wagtail is upgraded to (at least) version 4.0
|
True
|
Wagtail 4 upgrade - ### Description
Following on from #9674 - we want to upgrade Wagtail to version 4.0 (or possibly the latest version)
When it comes to upgrading, it's best to follow the increments in the versions. So to upgrade we will want to run the upgrades in the following order [ref](https://docs.wagtail.org/en/stable/releases/index.html):
- [ ] upgrade to 3.0.1 then run test suite
- [ ] upgrade to 3.0.2 then run test suite
- [ ] upgrade to 3.0.3 then run test suite
- [ ] upgrade to 4.0 then run test suite
- ... etc
It's a good idea to create single commits for each of the upgrade steps.
### What needs addressing for the upgrade to version 4
What needs to be addressed is explained in the [docs](https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations) but for clarity here is a brief outline:
- [ ] Check for any `Page.serve()` overrides and fix accordingly
- [ ] Live preview panel X-Frame-Options header [ref](https://docs.wagtail.org/en/stable/releases/4.0.html#opening-links-within-the-live-preview-panel)
- [ ] The PageRevision model has been replaced with a generic Revision model. Check for use of PageRevision
- [ ] Multiple method/class naming updates and replacements - e.g. `BaseSetting` replaced by `BaseSiteSetting`
### Additional context
- Wagtail 4 release notes upgrade considerations https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations
### Developer notes
- [ ] Create an upgrade branch from main
- [ ] Check your project’s console output for any deprecation warnings, and fix them where necessary `python -Wa manage.py check`
- [ ] Check the new version’s release notes https://docs.wagtail.org/en/stable/releases/4.0.html#upgrade-considerations
- [ ] Check the compatible Django / Python versions [table](https://docs.wagtail.io/en/stable/releases/upgrading.html#compatible-django-python-versions), for any dependencies that need upgrading first;
- [ ] Upgrade supporting requirements (Python, Django) if necessary
- [ ] Upgrade Wagtail
- [ ] Make new migration (might result in none).
- [ ] Migrate database changes (locally)
- [ ] Implement needed changes from upgrade considerations (see above)
- [ ] Perform testing
- [ ] Run test suites
- [ ] Smoke test site / testing journeys (manually on the site)
- [ ] Smoke test admin (Click around in the admin to see if anything is broken)
- [ ] Check for new deprecations `python -Wa manage.py check` and fix if necessary
## Acceptance criteria
- [ ] Wagtail is upgraded to (at least) version 4.0
|
main
|
wagtail upgrade description following on from we want to upgrade wagtail to version or possibly the latest version when it comes to upgrading it s best to follow the increments in the versions so to upgrade we will want to run the upgrades in the following order ref upgrade to then run test suite upgrade to then run test suite upgrade to then run test suite upgrade to then run test suite etc it s a good idea to create single commits for each of the upgrade steps what needs addressing for the upgrade to version what needs to be addressed is explained in the but for clarity here is a brief outline check for and page serve overrides and fix accordingly live preview panel x frame options header the pagerevision model has been replaced with a generic revision model check for use of pagerevision multiple method class naming updates and replacements e g basesetting replaced by basesitesetting additional context wagtail release notes upgrade considerations developer notes create an upgrade branch from main check your project’s console output for any deprecation warnings and fix them where necessary python wa manage py check check the new version’s release notes check the compatible django python versions for any dependencies that need upgrading first upgrade supporting requirements python django if necessary upgrade wagtail make new migration might result in none migrate database changes locally implement needed changes from upgrade considerations see above perform testing run test suites smoke test site testing journeys manually on the site smoke test admin click around in the admin to see if anything is broken check for new deprecations python wa manage py check and fix if necessary acceptance criteria wagtail is upgraded to at least version
| 1
|
4,634
| 23,985,465,460
|
IssuesEvent
|
2022-09-13 18:38:40
|
aws/aws-sam-cli-app-templates
|
https://api.github.com/repos/aws/aws-sam-cli-app-templates
|
closed
|
Feature request: Replace unittest TestCase with Pytest in hello-python templates
|
maintainer/need-followup type/feature stage/waiting-for-release
|
### Describe your idea/feature/enhancement
The test code in cookiecutter-aws-sam-hello-python is a somewhat confusing mix of pytest and unittest tests. I suggest we standardize on pytest and use fixtures in both the unit and integration tests.
eg
### Current Unit Tests
Are pytest flavour
```python
@pytest.fixture()
def apigw_event():
""" Generates API GW Event"""
return {
"body": '{ "test": "body"}'
...
}
def test_lambda_handler(apigw_event, mocker):
ret = app.lambda_handler(apigw_event, "")
```
#### Current Integration Tests
Are unittest flavour
``` python
class TestApiGateway(TestCase):
api_endpoint: str
def setUp(self) -> None:
....
```
### Proposal
Standardize on pytest in the integration test examples within cookiecutter-aws-sam-hello-python across python 3.7, 3.8 and 3.9
### Additional Details
I'm happy to implement this if the community gives it a thumbs up
|
True
|
Feature request: Replace unittest TestCase with Pytest in hello-python templates - ### Describe your idea/feature/enhancement
The test code in cookiecutter-aws-sam-hello-python is a somewhat confusing mix of pytest and unittest tests. I suggest we standardize on pytest and use fixtures in both the unit and integration tests.
eg
### Current Unit Tests
Are pytest flavour
```python
@pytest.fixture()
def apigw_event():
""" Generates API GW Event"""
return {
"body": '{ "test": "body"}'
...
}
def test_lambda_handler(apigw_event, mocker):
ret = app.lambda_handler(apigw_event, "")
```
#### Current Integration Tests
Are unittest flavour
``` python
class TestApiGateway(TestCase):
api_endpoint: str
def setUp(self) -> None:
....
```
### Proposal
Standardize on pytest in the integration test examples within cookiecutter-aws-sam-hello-python across python 3.7, 3.8 and 3.9
### Additional Details
I'm happy to implement this if the community gives it a thumbs up
|
main
|
feature request replace unittest testcase with pytest in hello python templates describe your idea feature enhancement the test code in cookiecutter aws sam hello python is a somewhat confusing mix of pytest and unittest tests i suggest we standardize on pytest and use fixtures in both the unit and integration tests eg current unit tests are pytest flavour python pytest fixture def apigw event generates api gw event return body test body def test lambda handler apigw event mocker ret app lambda handler apigw event current integration tests are unittest flavour python class testapigateway testcase api endpoint str def setup self none proposal standardize on pytest in the integration test examples within cookiecutter aws sam hello python across python and additional details i m happy to implement this if the community gives it a thumbs up
| 1
|
43,149
| 23,132,310,974
|
IssuesEvent
|
2022-07-28 11:31:56
|
woocommerce/woocommerce-android
|
https://api.github.com/repos/woocommerce/woocommerce-android
|
opened
|
Integrate Sentry Performance Monitoring
|
type: task category: performance
|
This issue is an epic for Sentry Performance Monitoring integration.
Tracks scope:
- https://github.com/Automattic/Automattic-Tracks-Android/pull/133
- https://github.com/Automattic/Automattic-Tracks-Android/pull/137
- https://github.com/Automattic/Automattic-Tracks-Android/pull/136
Woo scope:
- tbd
|
True
|
Integrate Sentry Performance Monitoring - This issue is an epic for Sentry Performance Monitoring integration.
Tracks scope:
- https://github.com/Automattic/Automattic-Tracks-Android/pull/133
- https://github.com/Automattic/Automattic-Tracks-Android/pull/137
- https://github.com/Automattic/Automattic-Tracks-Android/pull/136
Woo scope:
- tbd
|
non_main
|
integrate sentry performance monitoring this issue is an epic for sentry performance monitoring integration tracks scope woo scope tbd
| 0
|
4,117
| 15,526,359,641
|
IssuesEvent
|
2021-03-13 00:57:04
|
BCDevOps/OpenShift4-RollOut
|
https://api.github.com/repos/BCDevOps/OpenShift4-RollOut
|
closed
|
OCP KLAB2 - Bootstrap KLAB2 cluster
|
env/lab team/DXC tech/automation tech/provisioning
|
**Describe the issue**
This issue tracks effort spent bootstrapping the KLAB2 cluster, starting with the dedicated BOOTSTRAP VM (can re-use the one from KLAB) until all of the nodes are bootstrapped into operation.
**Which Sprint Goal is this issue related to?**
**Additional context**
**Definition of done Checklist (where applicable)**
- [x] Bootstrap BOOTSTRAP VM
- [x] Bootstrap master nodes and confirm when BOOTSTRAP VM can be shut down.
- [x] Bootstrap infra nodes and confirm when openshift-install is complete. Approve CSRs as needed.
- [x] Bootstrap APP nodes. Approve CSRs as needed..
|
1.0
|
OCP KLAB2 - Bootstrap KLAB2 cluster - **Describe the issue**
This issue tracks effort spent bootstrapping the KLAB2 cluster, starting with the dedicated BOOTSTRAP VM (can re-use the one from KLAB) until all of the nodes are bootstrapped into operation.
**Which Sprint Goal is this issue related to?**
**Additional context**
**Definition of done Checklist (where applicable)**
- [x] Bootstrap BOOTSTRAP VM
- [x] Bootstrap master nodes and confirm when BOOTSTRAP VM can be shut down.
- [x] Bootstrap infra nodes and confirm when openshift-install is complete. Approve CSRs as needed.
- [x] Bootstrap APP nodes. Approve CSRs as needed..
|
non_main
|
ocp bootstrap cluster describe the issue this issue tracks effort spent bootstrapping the cluster starting with the dedicated bootstrap vm can re use the one from klab until all of the nodes are bootstrapped into operation which sprint goal is this issue related to additional context definition of done checklist where applicable bootstrap bootstrap vm bootstrap master nodes and confirm when bootstrap vm can be shut down bootstrap infra nodes and confirm when openshift install is complete approve csrs as needed bootstrap app nodes approve csrs as needed
| 0
|
452,196
| 32,051,570,809
|
IssuesEvent
|
2023-09-23 16:12:08
|
GeorgeBaidooJr9/terraform-beginner-bootcamp-2023
|
https://api.github.com/repos/GeorgeBaidooJr9/terraform-beginner-bootcamp-2023
|
closed
|
Refactor Terraform CLI
|
bug documentation
|
There is an issue with installing Terraform CLI.
Need to make sure automation is working
|
1.0
|
Refactor Terraform CLI - There is an issue with installing Terraform CLI.
Need to make sure automation is working
|
non_main
|
refactor terraform cli there is an issue with installing terraform cli need to make sure automation is working
| 0
|
4,900
| 25,180,574,246
|
IssuesEvent
|
2022-11-11 13:18:50
|
Pandora-IsoMemo/iso-app
|
https://api.github.com/repos/Pandora-IsoMemo/iso-app
|
opened
|
Provide code for HTML interface
|
Support: IT maintainance
|
@isomemo asked for the code for the html page:
> 3) Please send me the code for the html interface where we can select apps (https://isomemoapp.com/) so that our designers can update it. If this is truly only html code our designers can download it directly and send back the modified version. Please let me know!
@jan-abel-inwt could you check where the code can be found? @wahani mentioned it must lie somewhere on the shinyproxy server. If you have questions, please ask @wahani.
Let me know, when you found the code. Thanks! :slightly_smiling_face:
|
True
|
Provide code for HTML interface - @isomemo asked for the code for the html page:
> 3) Please send me the code for the html interface where we can select apps (https://isomemoapp.com/) so that our designers can update it. If this is truly only html code our designers can download it directly and send back the modified version. Please let me know!
@jan-abel-inwt could you check where the code can be found? @wahani mentioned it must lie somewhere on the shinyproxy server. If you have questions, please ask @wahani.
Let me know, when you found the code. Thanks! :slightly_smiling_face:
|
main
|
provide code for html interface isomemo asked for the code for the html page please send me the code for the html interface where we can select apps so that our designers can update it if this is truly only html code our designers can download it directly and send back the modified version please let me know jan abel inwt could you check where the code can be found wahani mentioned it must lie somewhere on the shinyproxy server if you have questions please ask wahani let me know when you found the code thanks slightly smiling face
| 1
|
723,236
| 24,890,354,848
|
IssuesEvent
|
2022-10-28 11:26:25
|
faebryk/faebryk
|
https://api.github.com/repos/faebryk/faebryk
|
closed
|
Release 2.0.0 <Insider>
|
⭐ goal: addition 🟥 priority: critical
|
### Feature Request
- [x] merge pytest pr #135
- [x] remove __init__ #136
- [x] test vco project https://github.com/ruben-iteng/eurorack-super_simple_oscillator/pull/1
- [x] test kicad airnode import
- [x] make pip package (done at release, just change version number) #142
- [x] check deterministic netlist naming
- [x] move source into src folder #140
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
1.0
|
Release 2.0.0 <Insider> - ### Feature Request
- [x] merge pytest pr #135
- [x] remove __init__ #136
- [x] test vco project https://github.com/ruben-iteng/eurorack-super_simple_oscillator/pull/1
- [x] test kicad airnode import
- [x] make pip package (done at release, just change version number) #142
- [x] check deterministic netlist naming
- [x] move source into src folder #140
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
non_main
|
release feature request merge pytest pr remove init test vco project test kicad airnode import make pip package done at release just change version number check deterministic netlist naming move source into src folder code of conduct i agree to follow this project s code of conduct
| 0
|
133,615
| 12,546,300,098
|
IssuesEvent
|
2020-06-05 20:27:17
|
ZupIT/charlescd
|
https://api.github.com/repos/ZupIT/charlescd
|
closed
|
Broken link in docs - section getting started
|
bug documentation
|
**Describe the bug**
Broken link in docs - section getting started
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://docs.charlescd.io/v/v0.2.1/primeiros-passos/instalando-charles
2. Click in "customizacão" ( https://docs.charlescd.io/primeiros-passos/instalando-charles#customizacao-total )
3. See error
**Expected behavior**
Show the page
**Screenshots**
**Your Environment**
- CharlesCD version used:
- Description of environment where CharlesCD is running:
- Browser Name and version (if applicable):
**Additional context**
Add any other context about the problem here.
|
1.0
|
Broken link in docs - section getting started - **Describe the bug**
Broken link in docs - section getting started
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://docs.charlescd.io/v/v0.2.1/primeiros-passos/instalando-charles
2. Click in "customizacão" ( https://docs.charlescd.io/primeiros-passos/instalando-charles#customizacao-total )
3. See error
**Expected behavior**
Show the page
**Screenshots**
**Your Environment**
- CharlesCD version used:
- Description of environment where CharlesCD is running:
- Browser Name and version (if applicable):
**Additional context**
Add any other context about the problem here.
|
non_main
|
broken link in docs section getting started describe the bug broken link in docs section getting started to reproduce steps to reproduce the behavior go to click in customizacão see error expected behavior show the page screenshots your environment charlescd version used description of environment where charlescd is running browser name and version if applicable additional context add any other context about the problem here
| 0
|
3,039
| 11,259,006,586
|
IssuesEvent
|
2020-01-13 07:01:55
|
microsoft/UVAtlas
|
https://api.github.com/repos/microsoft/UVAtlas
|
closed
|
Retire support for VS 2015
|
maintainence
|
In 2020, I plan to retire support for VS 2015. The following projects will be removed:
UVAtlas_2015
UVAtlas_2015_Win10
UVAtlas_Windows10_2015
UVAtlas_XboxOneXDK_2015
UVAtlasTool_2015
Please put any requests for continued support for one or more of these here.
|
True
|
Retire support for VS 2015 - In 2020, I plan to retire support for VS 2015. The following projects will be removed:
UVAtlas_2015
UVAtlas_2015_Win10
UVAtlas_Windows10_2015
UVAtlas_XboxOneXDK_2015
UVAtlasTool_2015
Please put any requests for continued support for one or more of these here.
|
main
|
retire support for vs in i plan to retire support for vs the following projects will be removed uvatlas uvatlas uvatlas uvatlas xboxonexdk uvatlastool please put any requests for continued support for one or more of these here
| 1
|
322,622
| 9,820,507,540
|
IssuesEvent
|
2019-06-14 02:57:17
|
plotly/dashR
|
https://api.github.com/repos/plotly/dashR
|
closed
|
Rename package to dash
|
high priority
|
**DashR repository** [#93](https://github.com/plotly/dashR/pull/93)
- [x] Edit `DESCRIPTION`
- [x] Edit `dashR-package.Rd`
- [x] Edit `README.md`
**dash repository** [#770](https://github.com/plotly/dash/pull/770)
- [x] Edit `_r_components_generation.py`
- [x] Edit `component_generator.py`
**dash-sample-apps repository**
- [ ] Edit all `app.R` scripts to use new package name
**dash-docs repository**
- [ ] Edit all R scripts to use new package name (in examples and code for docs chapters)
**dash-html-components repository** [#117](https://github.com/plotly/dash-html-components/pull/117)
- [x] Enforce dependency on `dash` rather than `dashR`, while updating to 0.16.0
**dash-core-components repository** [#566](https://github.com/plotly/dash-core-components/pull/566)
- [x] Enforce dependency on `dash` rather than `dashR`, while updating to 0.48.0
|
1.0
|
Rename package to dash - **DashR repository** [#93](https://github.com/plotly/dashR/pull/93)
- [x] Edit `DESCRIPTION`
- [x] Edit `dashR-package.Rd`
- [x] Edit `README.md`
**dash repository** [#770](https://github.com/plotly/dash/pull/770)
- [x] Edit `_r_components_generation.py`
- [x] Edit `component_generator.py`
**dash-sample-apps repository**
- [ ] Edit all `app.R` scripts to use new package name
**dash-docs repository**
- [ ] Edit all R scripts to use new package name (in examples and code for docs chapters)
**dash-html-components repository** [#117](https://github.com/plotly/dash-html-components/pull/117)
- [x] Enforce dependency on `dash` rather than `dashR`, while updating to 0.16.0
**dash-core-components repository** [#566](https://github.com/plotly/dash-core-components/pull/566)
- [x] Enforce dependency on `dash` rather than `dashR`, while updating to 0.48.0
|
non_main
|
rename package to dash dashr repository edit description edit dashr package rd edit readme md dash repository edit r components generation py edit component generator py dash sample apps repository edit all app r scripts to use new package name dash docs repository edit all r scripts to use new package name in examples and code for docs chapters dash html components repository enforce dependency on dash rather than dashr while updating to dash core components repository enforce dependency on dash rather than dashr while updating to
| 0
|
5,374
| 27,005,223,692
|
IssuesEvent
|
2023-02-10 11:05:04
|
cncf/glossary
|
https://api.github.com/repos/cncf/glossary
|
closed
|
i18n of "Tags" pages
|
maintainers
|
I found that currently some parts of "Tags" pages are not internationalized:
https://deploy-preview-1122--cncfglossary.netlify.app/ko/tags/애플리케이션/

https://deploy-preview-1122--cncfglossary.netlify.app/ko/tags/

and that thus those cannot be localized with [some simple modifications](https://github.com/cncf/glossary/pull/1232):

---

So I suggest to achieve i18n of "Tags" pages by:
- adding some strings in `en.toml` (then each lang team can add the strings in their lang on `xx.toml` as well)
- modifying `taxonomy.html` (currently, our one is from [`docsy/layouts/_default/taxonomy.html`](https://github.com/google/docsy/blob/main/layouts/_default/taxonomy.html))
|
True
|
i18n of "Tags" pages - I found that currently some parts of "Tags" pages are not internationalized:
https://deploy-preview-1122--cncfglossary.netlify.app/ko/tags/애플리케이션/

https://deploy-preview-1122--cncfglossary.netlify.app/ko/tags/

and that thus those cannot be localized with [some simple modifications](https://github.com/cncf/glossary/pull/1232):

---

So I suggest to achieve i18n of "Tags" pages by:
- adding some strings in `en.toml` (then each lang team can add the strings in their lang on `xx.toml` as well)
- modifying `taxonomy.html` (currently, our one is from [`docsy/layouts/_default/taxonomy.html`](https://github.com/google/docsy/blob/main/layouts/_default/taxonomy.html))
|
main
|
of tags pages i found that currently some parts of tags pages are not internationalized and that thus those cannot be localized with so i suggest to achieve of tags pages by adding some strings in en toml then each lang team can add the strings in their lang on xx toml as well modifying taxonomy html currently our one is from
| 1
|
3,951
| 17,910,426,722
|
IssuesEvent
|
2021-09-09 03:59:59
|
microsoft/DirectXMath
|
https://api.github.com/repos/microsoft/DirectXMath
|
closed
|
Clean up the plethora of _M_ARM platform defines
|
maintainence
|
The library now contains a number of ARM64 variants, and the preprocessor defines used in DirectXMath.h, DirectXMathMatrix.inl, DirectXMathVector.inl, and DirectXPackedVector.inl could use some cleanup to simplify things.
|
True
|
Clean up the plethora of _M_ARM platform defines - The library now contains a number of ARM64 variants, and the preprocessor defines used in DirectXMath.h, DirectXMathMatrix.inl, DirectXMathVector.inl, and DirectXPackedVector.inl could use some cleanup to simplify things.
|
main
|
clean up the plethora of m arm platform defines the library now contains a number of variants and the preprocessor defines used in directxmath h directxmathmatrix inl directxmathvector inl and directxpackedvector inl could use some cleanup to simplify things
| 1
|
4,087
| 19,297,802,550
|
IssuesEvent
|
2021-12-12 21:39:28
|
amyjko/faculty
|
https://api.github.com/repos/amyjko/faculty
|
closed
|
Merge redundant books data
|
maintainability
|
There's a `books` field in the profile and also `book` types in the publication list. They're redundant. Just leave everything in the publication list and filter it for the books page.
|
True
|
Merge redundant books data - There's a `books` field in the profile and also `book` types in the publication list. They're redundant. Just leave everything in the publication list and filter it for the books page.
|
main
|
merge redundant books data there s a books field in the profile and also book types in the publication list they re redundant just leave everything in the publication list and filter it for the books page
| 1
|
350,368
| 24,980,526,603
|
IssuesEvent
|
2022-11-02 11:18:49
|
Mischback/mailsrv
|
https://api.github.com/repos/Mischback/mailsrv
|
opened
|
``Sphinx`` / ``readthedocs``-compatible Documentation
|
documentation enhancement
|
Provide a setup for ``sphinx``, including a general structure for the documentation.
For the setup, refer to https://github.com/Mischback/django-calingen
|
1.0
|
``Sphinx`` / ``readthedocs``-compatible Documentation - Provide a setup for ``sphinx``, including a general structure for the documentation.
For the setup, refer to https://github.com/Mischback/django-calingen
|
non_main
|
sphinx readthedocs compatible documentation provide a setup for sphinx including a general structure for the documentation for the setup refer to
| 0
|
18,009
| 24,025,354,905
|
IssuesEvent
|
2022-09-15 11:02:19
|
COS301-SE-2022/Pure-LoRa-Tracking
|
https://api.github.com/repos/COS301-SE-2022/Pure-LoRa-Tracking
|
closed
|
(processing): message queue CRON service
|
(system) Server (bus) processing
|
Check message queue for data and store in db
Consider the time at which the data came in;
it may need to be matched with previous data to complete a row in the database
|
1.0
|
(processing): message queue CRON service - Check message queue for data and store in db
Consider the time at which the data came in;
it may need to be matched with previous data to complete a row in the database
|
non_main
|
processing message queue cron service check message queue for data and store in db consider the case of what time the data came in it may be required to be matched with previous data to complete a row in the database
| 0
|
285,704
| 24,690,650,549
|
IssuesEvent
|
2022-10-19 08:17:46
|
turbot/steampipe
|
https://api.github.com/repos/turbot/steampipe
|
opened
|
Single job with tests for acceptance and release workflows
|
test
|
Currently, even though we have a single set of test files, they are run as separate jobs defined in both the acceptance-test and release workflows.
This may lead to divergence in how the tests are run between the two.
Ideally, there should be one workflow which runs just the tests and which is triggered by the acceptance and release workflows.
|
1.0
|
Single job with tests for acceptance and release workflows - Currently, even though we have a single set of test files, they are run as separate jobs defined in both the acceptance-test and release workflows.
This may lead to divergence in how the tests are run between the two.
Ideally, there should be one workflow which runs just the tests and which is triggered by the acceptance and release workflows.
|
non_main
|
single job with tests for acceptance and release workflows currently even though we have a single set of test files they are run as jobs described from the acceptance test and the release workflow this may lead to fragmentation on how the tests are run in the two ideally there should be one workflow which runs just the tests and which is triggered by the acceptance and release workflows
| 0
|
4,706
| 24,270,828,065
|
IssuesEvent
|
2022-09-28 10:07:23
|
mozilla/foundation.mozilla.org
|
https://api.github.com/repos/mozilla/foundation.mozilla.org
|
closed
|
SEO | Pages returned 4XX status code
|
engineering Maintain
|
This is a set of pages that have broken links published on them. Both the link and page are indicated in the inventory. In most cases the broken links appear in the HREFLANG list.
https://docs.google.com/spreadsheets/d/15HwgpxSYc4Zl809kcebAhLfLYXFuIk8ZP-Qvk3yVV8Q/edit#gid=1124145199
|
True
|
SEO | Pages returned 4XX status code - This is a set of pages that have broken links published on them. Both the link and page are indicated in the inventory. In most cases the broken links appear in the HREFLANG list.
https://docs.google.com/spreadsheets/d/15HwgpxSYc4Zl809kcebAhLfLYXFuIk8ZP-Qvk3yVV8Q/edit#gid=1124145199
|
main
|
seo pages returned status code this is a set of pages that have broken links published on them both the link and page are indicated in the inventory in most cases the broken links appear in the hreflang list
| 1
|
642,457
| 20,888,133,437
|
IssuesEvent
|
2022-03-23 08:13:26
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
mall.industry.siemens.com - site is not usable
|
browser-firefox priority-normal engine-gecko
|
<!-- @browser: Firefox 100.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:100.0) Gecko/20100101 Firefox/100.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/101349 -->
**URL**: https://mall.industry.siemens.com/spice/portal/portal?SESSIONID=f0y14oxucngg3rtgl1yd2fn5
**Browser / Version**: Firefox 100.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
When initiating an spice application I get "something went wrong" error
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/3/e51bdd17-f318-4a4a-b047-87b871eccb44.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220321065848</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/3/dd809283-04a0-40d2-a7d0-bf03c4af9ef1)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
mall.industry.siemens.com - site is not usable - <!-- @browser: Firefox 100.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:100.0) Gecko/20100101 Firefox/100.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/101349 -->
**URL**: https://mall.industry.siemens.com/spice/portal/portal?SESSIONID=f0y14oxucngg3rtgl1yd2fn5
**Browser / Version**: Firefox 100.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
When initiating an spice application I get "something went wrong" error
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/3/e51bdd17-f318-4a4a-b047-87b871eccb44.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220321065848</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/3/dd809283-04a0-40d2-a7d0-bf03c4af9ef1)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_main
|
mall industry siemens com site is not usable url browser version firefox operating system windows tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce when initiating an spice application i get something went wrong error view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
396,667
| 27,129,580,554
|
IssuesEvent
|
2023-02-16 08:49:42
|
thedevsouvik/shop
|
https://api.github.com/repos/thedevsouvik/shop
|
closed
|
Reduce client bundle with server component
|
documentation enhancement question
|
Reduce the client-side bundle by moving Client Components to the leaves of your component tree where possible, as recommended in the Next.js [docs](https://beta.nextjs.org/docs/rendering/server-and-client-components#moving-client-components-to-the-leaves)
|
1.0
|
Reduce client bundle with server component - Reduce the client-side bundle by moving Client Components to the leaves of your component tree where possible, as recommended in the Next.js [docs](https://beta.nextjs.org/docs/rendering/server-and-client-components#moving-client-components-to-the-leaves)
|
non_main
|
reduce client bundle with server component reduce client side bundle by moving client components to the leaves of your component tree where possible recommended way mentions at nextjs
| 0
|
95,210
| 10,868,603,468
|
IssuesEvent
|
2019-11-15 04:27:39
|
evelynejuliet/pe
|
https://api.github.com/repos/evelynejuliet/pe
|
opened
|
User guide quick start is still addressbook.jar
|
severity.Medium type.DocumentationBug
|

It is not updated to the current jar yet.
|
1.0
|
User guide quick start is still addressbook.jar - 
It is not updated to the current jar yet.
|
non_main
|
user guide quick start is still addressbook jar it is not updated to the current jar yet
| 0
|
61,018
| 6,721,304,768
|
IssuesEvent
|
2017-10-16 11:09:54
|
SatelliteQE/robottelo
|
https://api.github.com/repos/SatelliteQE/robottelo
|
opened
|
ContentViewTestCase.test_positive_remove_cv_version_from_multi_env_capsule_scenario - timeout openning channel
|
6.3 test-failure
|
There is an SSH timeout on `vm.suspend`. I think we might need to tweak the timeout in the test, since our CR is getting slower.
- if you stumble upon more tests with the same issue, please add them here and change the title to turn this issue into a tracker.
thx
|
1.0
|
ContentViewTestCase.test_positive_remove_cv_version_from_multi_env_capsule_scenario - timeout openning channel - There is an SSH timeout on `vm.suspend`. I think we might need to tweak the timeout in the test, since our CR is getting slower.
- if you stumble upon more tests with the same issue, please add them here and change the title to turn this issue into a tracker.
thx
|
non_main
|
contentviewtestcase test positive remove cv version from multi env capsule scenario timeout openning channel there is a ssh timeout on vm suspend i think we might need to tweak the timeout in the test since our cr is getting slower if you stumble upon more tests with the same issue pls add them here and change the title to turn this issue into a tracker thx
| 0
|
61,608
| 17,023,737,787
|
IssuesEvent
|
2021-07-03 03:34:35
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Search bug of housenumbers like 43/2
|
Component: nominatim Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 1.22pm, Wednesday, 3rd August 2011]**
Housenumbers like 43/2 are very common in Ukraine, so please update search.php
$sHouseNumberRegex = '\\\\m'.str_replace(' ','[-, ]',$aSearch['sHouseNumber']).'\\\\M';
to
$sHouseNumberRegex = '\\\\m'.str_replace(' ','[-, /]',$aSearch['sHouseNumber']).'\\\\M';
|
1.0
|
Search bug of housenumbers like 43/2 - **[Submitted to the original trac issue database at 1.22pm, Wednesday, 3rd August 2011]**
Housenumbers like 43/2 are very common in Ukraine, so please update search.php
$sHouseNumberRegex = '\\\\m'.str_replace(' ','[-, ]',$aSearch['sHouseNumber']).'\\\\M';
to
$sHouseNumberRegex = '\\\\m'.str_replace(' ','[-, /]',$aSearch['sHouseNumber']).'\\\\M';
|
non_main
|
search bug of housenumbers like housenumbers like is very common in ukraine so please update search php shousenumberregex m str replace asearch m to shousenumberregex m str replace asearch m
| 0
|
1,102
| 4,972,332,279
|
IssuesEvent
|
2016-12-05 21:16:01
|
ansible/ansible-modules-extras
|
https://api.github.com/repos/ansible/ansible-modules-extras
|
closed
|
Pagerduty module incorrectly identifies errors on maintenance window creation
|
affects_2.1 bug_report waiting_on_maintainer
|
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ansible-modules-extras/monitoring/pagerduty.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
RHEL6 kernel 2.6.32-573.26.1.el6.x86_64
##### SUMMARY
<!--- Explain the problem briefly -->
Per the pagerduty API listed here: https://developer.pagerduty.com/documentation/rest/maintenance_windows/create
The API should return HTTP 201 Created if it successfully creates a maintenance window. The module only looks for HTTP 200 and treats all other statuses as errors.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: localhost
connection: local
sudo: no
tasks:
- name: Turn off pagerduty
pagerduty: name={{ pagerduty_service_name }}
token={{ pagerduty_token }}
requester_id={{ requester_id }}
hours=4
desc="Maintenance window for ECOM Build"
state=running
service={{ pagerduty_service }}
register: pd_window
tags: create
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Maintenance window created in pagerduty and success/changed status in ansible
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
The PagerDuty window was successfully created; however, Ansible considered the return status an error:
<!--- Paste verbatim command output between quotes below -->
```
TASK [Turn off pagerduty] ******************************************************
task path: /home/ansible/Ansible/playbooks/builds/ecom/test-pagerduty.yml:6
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1466022753.18-127112428727096 `" && echo ansible-tmp-1466022753.18-127112428727096="` echo $HOME/.ansible/tmp/ansible-tmp-1466022753.18-127112428727096 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpPFHdP5 TO /home/ansible/.ansible/tmp/ansible-tmp-1466022753.18-127112428727096/pagerduty
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1466022753.18-127112428727096/pagerduty; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1466022753.18-127112428727096/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"desc": "Maintenance window for ECOM Build", "hours": "4", "minutes": "0", "name": "### Removed ###", "passwd": null, "requester_id": "### REMOVED ###", "service": ["### REMOVED ###"], "state": "running", "token": "### REMOVED ####", "user": null, "validate_certs": true}, "module_name": "pagerduty"}, "msg": "failed to create the window: OK (unknown bytes)"}
```
|
True
|
Pagerduty module incorrectly identifies errors on maintenance window creation - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ansible-modules-extras/monitoring/pagerduty.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
-->
RHEL6 kernel 2.6.32-573.26.1.el6.x86_64
##### SUMMARY
<!--- Explain the problem briefly -->
Per the pagerduty API listed here: https://developer.pagerduty.com/documentation/rest/maintenance_windows/create
The API returns HTTP 201 Created when it successfully creates a maintenance window, but the module only accepts HTTP 200 and treats all other statuses as errors.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- hosts: localhost
connection: local
sudo: no
tasks:
- name: Turn off pagerduty
pagerduty: name={{ pagerduty_service_name }}
token={{ pagerduty_token }}
requester_id={{ requester_id }}
hours=4
desc="Maintenance window for ECOM Build"
state=running
service={{ pagerduty_service }}
register: pd_window
tags: create
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
Maintenance window created in pagerduty and success/changed status in ansible
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
The PagerDuty window was successfully created; however, Ansible considered the return status an error:
<!--- Paste verbatim command output between quotes below -->
```
TASK [Turn off pagerduty] ******************************************************
task path: /home/ansible/Ansible/playbooks/builds/ecom/test-pagerduty.yml:6
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1466022753.18-127112428727096 `" && echo ansible-tmp-1466022753.18-127112428727096="` echo $HOME/.ansible/tmp/ansible-tmp-1466022753.18-127112428727096 `" ) && sleep 0'
<127.0.0.1> PUT /tmp/tmpPFHdP5 TO /home/ansible/.ansible/tmp/ansible-tmp-1466022753.18-127112428727096/pagerduty
<127.0.0.1> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1466022753.18-127112428727096/pagerduty; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1466022753.18-127112428727096/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"desc": "Maintenance window for ECOM Build", "hours": "4", "minutes": "0", "name": "### Removed ###", "passwd": null, "requester_id": "### REMOVED ###", "service": ["### REMOVED ###"], "state": "running", "token": "### REMOVED ####", "user": null, "validate_certs": true}, "module_name": "pagerduty"}, "msg": "failed to create the window: OK (unknown bytes)"}
```
|
main
|
pagerduty module incorrectly identifies errors on maintenance window creation issue type bug report component name ansible modules extras monitoring pagerduty py ansible version ansible config file etc ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say "n a" for anything that is not platform specific kernel summary per the pagerduty api listed here the api should return a http created if it successfully creates a maintenance window the module looks for http and considers all others error status steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used hosts localhost connection local sudo no tasks name turn off pagerduty pagerduty name pagerduty service name token pagerduty token requester id requester id hours desc maintenance window for ecom build state running service pagerduty service register pd window tags create expected results maintenance window created in pagerduty and success changed status in ansible actual results pagerduty window successfully created however ansible considered the return status an error task task path home ansible ansible playbooks builds ecom test pagerduty yml establish local connection for user ansible exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home ansible ansible tmp ansible tmp pagerduty exec bin sh c lang en us utf lc all en us utf lc messages en us utf usr bin python home ansible ansible tmp ansible tmp pagerduty rm rf home ansible ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module args desc maintenance window for ecom build hours minutes name removed passwd null requester id removed service state running token removed user null validate certs true module name pagerduty msg failed to create the window ok unknown bytes
| 1
|
535
| 3,932,450,788
|
IssuesEvent
|
2016-04-25 15:46:45
|
tgstation/-tg-station
|
https://api.github.com/repos/tgstation/-tg-station
|
closed
|
Ian clothing is getting out of hand (codewise)
|
Maintainability - Hinders improvements
|
We need to replace the HUGE switch()s with vars like ```dog_name```,```dog_desc```,```dog_phrases```,```dog_icon``` and ```dog_icon_state``` on the ```/obj/item/clothing``` type
@coiax has volunteered to do this.
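The refactor this record describes — per-type vars instead of huge `switch()`s — can be sketched as a data-driven lookup. Since DM code isn't runnable here, this is a hedged Python analogue with made-up values:

```python
class Clothing:
    # Defaults for the per-type data that previously lived in one
    # giant switch() on the item's type.
    dog_name = "Ian"
    dog_desc = "It's a corgi."
    dog_icon_state = "corgi"

class Kilt(Clothing):
    # Each clothing subtype just overrides the vars it cares about.
    dog_name = "Ian McIan"
    dog_desc = "A corgi in a kilt."
    dog_icon_state = "corgi_kilt"

def dress_dog(item):
    # No switch on item type: read the attributes directly.
    return item.dog_name, item.dog_icon_state
```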
|
True
|
Ian clothing is getting out of hand (codewise) - We need to replace the HUGE switch()s with vars like ```dog_name```,```dog_desc```,```dog_phrases```,```dog_icon``` and ```dog_icon_state``` on the ```/obj/item/clothing``` type
@coiax has volunteered to do this.
|
main
|
ian clothing is getting out of hand codewise we need to replace the huge switch s with vars like dog name dog desc dog phrases dog icon and dog icon state on the obj item clothing type coiax has volunteered to do this
| 1
|
1,991
| 6,694,292,149
|
IssuesEvent
|
2017-10-10 00:56:28
|
duckduckgo/zeroclickinfo-spice
|
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
|
closed
|
Airlines: Doesn't trigger for all relevant queries
|
Bug External Maintainer Input Requested Status: Tolerated Triggering
|
<!-- Please use the appropriate issue title format:
BUG FIX
{IA Name} Bug: {Short description of bug}
SUGGESTION
{IA Name} Suggestion: {Short description of suggestion}
OTHER
{IA Name}: {Short description} -->
### Description
<!-- Describe the bug or suggestion in detail -->
The mentioned IA gets triggered for the query [`ai 502`](https://duckduckgo.com/?q=ai+502&atb=v73-4_q&ia=flights). However, it is not triggered by either [`ai 137`](https://duckduckgo.com/?q=ai+137&atb=v73-4_q&ia=web) or [`ai 138`](https://duckduckgo.com/?q=ai+138&atb=v73-4_q&ia=web).
## People to notify
<!-- Please @mention any relevant people/organizations here:-->
@amoraleda
<!-- LANGUAGE LEADERS ONLY: REMOVE THIS LINE
## Get Started
- [ ] 1) Claim this issue by commenting below
- [ ] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-spice/blob/master/CONTRIBUTING.md)
- [ ] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [ ] 4) Create a Pull Request
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
<!-- DO NOT REMOVE -->
---
<!-- The Instant Answer ID can be found by clicking the `?` icon beside the Instant Answer result on DuckDuckGo.com -->
Instant Answer Page: https://duck.co/ia/view/airlines
<!-- FILL THIS IN: ^^^^ -->
|
True
|
Airlines: Doesn't trigger for all relevant queries - <!-- Please use the appropriate issue title format:
BUG FIX
{IA Name} Bug: {Short description of bug}
SUGGESTION
{IA Name} Suggestion: {Short description of suggestion}
OTHER
{IA Name}: {Short description} -->
### Description
<!-- Describe the bug or suggestion in detail -->
The mentioned IA gets triggered for the query [`ai 502`](https://duckduckgo.com/?q=ai+502&atb=v73-4_q&ia=flights). However, it is not triggered by either [`ai 137`](https://duckduckgo.com/?q=ai+137&atb=v73-4_q&ia=web) or [`ai 138`](https://duckduckgo.com/?q=ai+138&atb=v73-4_q&ia=web).
## People to notify
<!-- Please @mention any relevant people/organizations here:-->
@amoraleda
<!-- LANGUAGE LEADERS ONLY: REMOVE THIS LINE
## Get Started
- [ ] 1) Claim this issue by commenting below
- [ ] 2) Review our [Contributing Guide](https://github.com/duckduckgo/zeroclickinfo-spice/blob/master/CONTRIBUTING.md)
- [ ] 3) [Set up your development environment](https://docs.duckduckhack.com/welcome/setup-dev-environment.html), and fork this repository
- [ ] 4) Create a Pull Request
## Resources
- Join [DuckDuckHack Slack](https://quackslack.herokuapp.com/) to ask questions
- Join the [DuckDuckHack Forum](https://forum.duckduckhack.com/) to discuss project planning and Instant Answer metrics
- Read the [DuckDuckHack Documentation](https://docs.duckduckhack.com/) for technical help
<!-- DO NOT REMOVE -->
---
<!-- The Instant Answer ID can be found by clicking the `?` icon beside the Instant Answer result on DuckDuckGo.com -->
Instant Answer Page: https://duck.co/ia/view/airlines
<!-- FILL THIS IN: ^^^^ -->
|
main
|
airlines doesn t trigger for all relevant queries please use the appropriate issue title format bug fix ia name bug short description of bug suggestion ia name suggestion short description of suggestion other ia name short description description the mentioned ia gets triggered for query however it doesn t get triggered for either of and queries people to notify amoraleda language leaders only remove this line get started claim this issue by commenting below review our and fork this repository create a pull request resources join to ask questions join the to discuss project planning and instant answer metrics read the for technical help instant answer page
| 1
|
92,042
| 8,337,534,228
|
IssuesEvent
|
2018-09-28 11:29:02
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
closed
|
test results file not present
|
tests
|
## Description
When running unit and browser tests there is no file containing results that can be parsed by the CI system.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. execute unit tests
## Actual result:
no file with test results gets created
## Expected result:
test output present as XML file in JUnit results format
## Reproduces how often:
every time
## Brave version (chrome://version info)
n/a
### Reproducible on current release:
yes
### Website problems only:
### Additional Information
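For reference, the JUnit results format this report asks for is plain XML. A minimal sketch using only the Python standard library — the test names below are illustrative, not Brave's actual tests:

```python
import xml.etree.ElementTree as ET

def junit_xml(results):
    """Render a minimal JUnit-style testsuite as an XML string.

    `results` is a list of (test_name, error_message_or_None) tuples.
    """
    suite = ET.Element(
        "testsuite",
        name="unit_tests",
        tests=str(len(results)),
        failures=str(sum(1 for _, err in results if err)),
    )
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err:
            # A failed case carries a nested <failure> element.
            ET.SubElement(case, "failure", message=err)
    return ET.tostring(suite, encoding="unicode")
```

A CI system can then parse this file with any JUnit-aware plugin.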
|
1.0
|
test results file not present - ## Description
When running unit and browser tests there is no file containing results that can be parsed by the CI system.
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1. execute unit tests
## Actual result:
no file with test results gets created
## Expected result:
test output present as XML file in JUnit results format
## Reproduces how often:
every time
## Brave version (chrome://version info)
n/a
### Reproducible on current release:
yes
### Website problems only:
### Additional Information
|
non_main
|
test results file not present description when running unit and browser tests there is no file containing results that can be parsed by the ci system steps to reproduce execute unit tests actual result no file with test results gets created expected result test output present as xml file in junit results format reproduces how often every time brave version chrome version info n a reproducible on current release yes website problems only additional information
| 0
|
3,878
| 17,190,195,281
|
IssuesEvent
|
2021-07-16 09:49:03
|
chocolatey-community/chocolatey-package-requests
|
https://api.github.com/repos/chocolatey-community/chocolatey-package-requests
|
closed
|
RFP - torifier
|
Status: Available For Maintainer(s) Status: Published
|
<!--
* Please ensure the package does not already exist in the Chocolatey Community Repository - https://chocolatey.org/packages - by using a relevant search.
* Please ensure there is no existing open package request.
* Please ensure the issue title starts with 'RFP - ' - for example 'RFP - adobe-reader'
* Please also ensure the issue title matches the identifier you expect the package should be named.
* Please ensure you have both the Software Project URL and the Software Download URL before continuing.
NOTE: Keep in mind we have an etiquette regarding communication that we expect folks to observe when they are looking for support in the Chocolatey community - https://github.com/chocolatey/chocolatey-package-requests/blob/master/README.md#etiquette-regarding-communication
PLEASE REMOVE ALL COMMENTS ONCE YOU HAVE READ THEM.
-->
## Checklist
- [x] The package I am requesting does not already exist on https://chocolatey.org/packages;
- [x] There is no open issue for this package;
- [x] The issue title starts with 'RFP - ';
- [x] The download URL is public and not locked behind a paywall / login;
## Package Details
Software project URL : https://www.torifier.com/
Direct download URL for the software / installer : https://cutt.ly/ebfiiPz
Software summary / short description: tunnel software applications through Tor without the need to reconfigure them
<!-- ## Package Expectations
Here you can make suggestions on what you would expect the package to do outside of 'installing' - eg. adding icons to the desktop
-->
|
True
|
RFP - torifier - <!--
* Please ensure the package does not already exist in the Chocolatey Community Repository - https://chocolatey.org/packages - by using a relevant search.
* Please ensure there is no existing open package request.
* Please ensure the issue title starts with 'RFP - ' - for example 'RFP - adobe-reader'
* Please also ensure the issue title matches the identifier you expect the package should be named.
* Please ensure you have both the Software Project URL and the Software Download URL before continuing.
NOTE: Keep in mind we have an etiquette regarding communication that we expect folks to observe when they are looking for support in the Chocolatey community - https://github.com/chocolatey/chocolatey-package-requests/blob/master/README.md#etiquette-regarding-communication
PLEASE REMOVE ALL COMMENTS ONCE YOU HAVE READ THEM.
-->
## Checklist
- [x] The package I am requesting does not already exist on https://chocolatey.org/packages;
- [x] There is no open issue for this package;
- [x] The issue title starts with 'RFP - ';
- [x] The download URL is public and not locked behind a paywall / login;
## Package Details
Software project URL : https://www.torifier.com/
Direct download URL for the software / installer : https://cutt.ly/ebfiiPz
Software summary / short description: tunnel software applications through Tor without the need to reconfigure them
<!-- ## Package Expectations
Here you can make suggestions on what you would expect the package to do outside of 'installing' - eg. adding icons to the desktop
-->
|
main
|
rfp torifier please ensure the package does not already exist in the chocolatey community repository by using a relevant search please ensure there is no existing open package request please ensure the issue title starts with rfp for example rfp adobe reader please also ensure the issue title matches the identifier you expect the package should be named please ensure you have both the software project url and the software download url before continuing note keep in mind we have an etiquette regarding communication that we expect folks to observe when they are looking for support in the chocolatey community please remove all comments once you have read them checklist the package i am requesting does not already exist on there is no open issue for this package the issue title starts with rfp the download url is public and not locked behind a paywall login package details software project url direct download url for the software installer software summary short description tunnel software applications through tor without the need to reconfigure them package expectations here you can make suggestions on what you would expect the package to do outside of installing eg adding icons to the desktop
| 1
|
460,378
| 13,208,846,177
|
IssuesEvent
|
2020-08-15 07:36:50
|
geolonia/docs.geolonia.com
|
https://api.github.com/repos/geolonia/docs.geolonia.com
|
opened
|
Add descriptions of the intended use cases for the existing styles
|
Priority: Middle enhancement
|
<ul>
<li><a href="https://github.com/geolonia/basic">geolonia/basic</a></li>
<li><a href="https://github.com/geolonia/midnight">geolonia/midnight</a></li>
<li><a href="https://github.com/geolonia/red-planet">geolonia/red-planet</a></li>
<li><a href="https://github.com/geolonia/notebook">geolonia/notebook</a></li>
</ul>
|
1.0
|
Add descriptions of the intended use cases for the existing styles - <ul>
<li><a href="https://github.com/geolonia/basic">geolonia/basic</a></li>
<li><a href="https://github.com/geolonia/midnight">geolonia/midnight</a></li>
<li><a href="https://github.com/geolonia/red-planet">geolonia/red-planet</a></li>
<li><a href="https://github.com/geolonia/notebook">geolonia/notebook</a></li>
</ul>
|
non_main
|
add descriptions of the intended use cases for the existing styles a href a href a href a href
| 0
|
62,189
| 12,198,363,545
|
IssuesEvent
|
2020-04-29 22:41:23
|
kwk/test-llvm-bz-import-5
|
https://api.github.com/repos/kwk/test-llvm-bz-import-5
|
closed
|
Clang emits loads of an alloca wider than the alloca for ARM
|
BZ-BUG-STATUS: RESOLVED BZ-RESOLUTION: FIXED clang/LLVM Codegen dummy import from bugzilla
|
This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=14048.
|
1.0
|
Clang emits loads of an alloca wider than the alloca for ARM - This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=14048.
|
non_main
|
clang emits loads of an alloca wider than the alloca for arm this issue was imported from bugzilla
| 0
|
2,560
| 8,708,543,962
|
IssuesEvent
|
2018-12-06 11:13:20
|
MarcusWolschon/osmeditor4android
|
https://api.github.com/repos/MarcusWolschon/osmeditor4android
|
opened
|
PropertyEditor refactoring
|
Maintainability Performance Task
|
- [ ] simplify instantiation: don't pass tags, relation id, etc., as we now have access to in-memory instances, contrary to when it was originally conceived
- [ ] separate activity and fragment so that the fragment can be used standalone
|
True
|
PropertyEditor refactoring - - [ ] simplify instantiation: don't pass tags, relation id, etc., as we now have access to in-memory instances, contrary to when it was originally conceived
- [ ] separate activity and fragment so that the fragment can be used standalone
|
main
|
propertyeditor refactoring simplify instantiation don t pass tags and relation id etc as we have access to in memory instances now contrary to when it was originally conceived separate activity and fragment so that the fragment can be used standalone
| 1
|