Unnamed: 0 int64 1 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 3 438 | labels stringlengths 4 308 | body stringlengths 7 254k | index stringclasses 7 values | text_combine stringlengths 96 254k | label stringclasses 2 values | text stringlengths 96 246k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,092 | 11,741,740,317 | IssuesEvent | 2020-03-11 22:32:02 | alacritty/alacritty | https://api.github.com/repos/alacritty/alacritty | closed | Mouse cursor rendered in low-res on a HiDPI Wayland output | A - deps B - bug C - waiting on maintainer DS - Wayland H - linux S - winit/glutin | > Which operating system does the issue occur on?
Linux
> If on linux, are you using X11 or Wayland?
Wayland, Sway git master roughly equivalent to 1.1rc2.
On a HiDPI output, with scaling set to 2, the mouse cursor set by Alacritty is a low-res one. I.e. it is probably rendered to a surface that has **not** had its buffer scale set. This means the compositor (Sway in this case) will scale it.
See the difference below. When the cursor is hovering over the title bar, it is rendered by Sway itself, in high-res. When it's hovering over the window content, it is rendered in low-res.


I realize it's likely this isn't a bug in Alacritty per se, but in one of its (wayland) dependencies. | True | Mouse cursor rendered in low-res on a HiDPI Wayland output - > Which operating system does the issue occur on?
Linux
> If on linux, are you using X11 or Wayland?
Wayland, Sway git master roughly equivalent to 1.1rc2.
On a HiDPI output, with scaling set to 2, the mouse cursor set by Alacritty is a low-res one. I.e. it is probably rendered to a surface that has **not** had its buffer scale set. This means the compositor (Sway in this case) will scale it.
See the difference below. When the cursor is hovering over the title bar, it is rendered by Sway itself, in high-res. When it's hovering over the window content, it is rendered in low-res.


I realize it's likely this isn't a bug in Alacritty per se, but in one of its (wayland) dependencies. | main | mouse cursor rendered in low res on a hidpi wayland output which operating system does the issue occur on linux if on linux are you using or wayland wayland sway git master roughly equivalent to on a hidpi output with scaling set to the mouse cursor set by alacritty is a low res one i e it is probably rendered to a surface that has not had its buffer scale set this means the compositor sway in this case will scale it see the difference below when the cursor is hovering over the title bar it is rendered by sway itself in high res when it s hovering over the window content it is rendered in low res i realize it s likely this isn t a bug in alacritty per sé but in one of its wayland dependencies | 1 |
5,488 | 27,401,720,167 | IssuesEvent | 2023-03-01 01:31:51 | aws/serverless-application-model | https://api.github.com/repos/aws/serverless-application-model | closed | Failed to publish to SAR with Error: ResultPath being null | type/bug area/step-function area/sar maintainer/need-followup | **Description:** If you deploy a `AWS::Serverless::StateMachine` with the AWS SAM CLI it works great, but you cannot publish this app/stack to SAR.
**Steps to reproduce the issue:**
1. Define a simple app with a `AWS::Serverless::StateMachine` in it
2. Run `sam package` and `sam publish`
**Observed result:**
App publication fails with this error:
> Error: SAM template is invalid. It cannot be deployed using AWS CloudFormation due to the following validation error: /Resources/powerTuningStateMachine/Type/Definition/States/Cleaner/ResultPath 'null' values are not allowed in templates
**Expected result:**
I'd expect the app to be published successfully, or at least I'd expect SAM CLI to provide a meaningful error.
| True | Failed to publish to SAR with Error: ResultPath being null - **Description:** If you deploy a `AWS::Serverless::StateMachine` with the AWS SAM CLI it works great, but you cannot publish this app/stack to SAR.
**Steps to reproduce the issue:**
1. Define a simple app with a `AWS::Serverless::StateMachine` in it
2. Run `sam package` and `sam publish`
**Observed result:**
App publication fails with this error:
> Error: SAM template is invalid. It cannot be deployed using AWS CloudFormation due to the following validation error: /Resources/powerTuningStateMachine/Type/Definition/States/Cleaner/ResultPath 'null' values are not allowed in templates
**Expected result:**
I'd expect the app to be published successfully, or at least I'd expect SAM CLI to provide a meaningful error.
| main | failed to publish to sar with error resultpath being null description if you deploy a aws serverless statemachine with the aws sam cli it works great but you cannot publish this app stack to sar steps to reproduce the issue define a simple app with a aws serverless statemachine in it run sam package and sam publish observed result app publication fails with this error error sam template is invalid it cannot be deployed using aws cloudformation due to the following validation error resources powertuningstatemachine type definition states cleaner resultpath null values are not allowed in templates expected result i d expect to app to be published successfully or at least i d expect sam cli to provide a meaningful error | 1 |
5,232 | 26,534,839,415 | IssuesEvent | 2023-01-19 14:58:55 | mozilla/foundation.mozilla.org | https://api.github.com/repos/mozilla/foundation.mozilla.org | closed | Upgrade `wagtail-inventory` to 1.6 | engineering maintain | ## Description
To unblock the upgrade of Wagtail to version 3.0 we need to upgrade `wagtail-inventory` to version 1.6.
See also: https://github.com/cfpb/wagtail-inventory/releases/tag/1.6
## Acceptance criteria
- [x] `wagtail-inventory` is upgraded to version 1.6 | True | Upgrade `wagtail-inventory` to 1.6 - ## Description
To unblock the upgrade of Wagtail to version 3.0 we need to upgrade `wagtail-inventory` to version 1.6.
See also: https://github.com/cfpb/wagtail-inventory/releases/tag/1.6
## Acceptance criteria
- [x] `wagtail-inventory` is upgraded to version 1.6 | main | upgrade wagtail inventory to description to unblock the upgrade of wagtail to version we need to upgrade wagtail inventory to version see also acceptance criteria wagtail inventory is upgraded to version | 1 |
78,413 | 22,264,383,484 | IssuesEvent | 2022-06-10 05:45:17 | foundry-rs/foundry | https://api.github.com/repos/foundry-rs/foundry | closed | Feature: SMTChecker support (a.k.a support all possible outputs) | T-feature C-forge P-normal Cmd-forge-build | ### Component
Forge
### Describe the feature you would like
As of 0.8.4, Solidity is deprecating the experimental pragma for SMT Checker as it runs on all files if enabled
```solidity
pragma experimental SMTChecker;
```
The new way to configure this is through the JSON config file
```json
"settings.modelChecker.targets": ["underflow", "overflow"]
```
You have to define which _engine_ it should run as well, [see https://docs.soliditylang.org/en/v0.8.11/smtchecker.html#smtchecker-engines](https://docs.soliditylang.org/en/v0.8.11/smtchecker.html#smtchecker-engines)
```json
"settings.modelChecker.solvers": ["smtlib2","z3"]
```
```
settings.modelChecker.targets=<targets>
```
And what targets to check
```
: --model-checker-targets assert,overflow
```
These options are not configurable currently for forge
Additional benefits would be when using the yul optimizer you would have access to the `ReasoningBasedSimplifier`
### Additional context
Additionally custom natspec parsing would help in tests
```js
/// @custom:smtchecker
```
maybe can be expanded to slither other tools, etc | 1.0 | Feature: SMTChecker support (a.k.a support all possible outputs) - ### Component
Forge
### Describe the feature you would like
As of 0.8.4, Solidity is deprecating the experimental pragma for SMT Checker as it runs on all files if enabled
```solidity
pragma experimental SMTChecker;
```
The new way to configure this is through the JSON config file
```json
"settings.modelChecker.targets": ["underflow", "overflow"]
```
You have to define which _engine_ it should run as well, [see https://docs.soliditylang.org/en/v0.8.11/smtchecker.html#smtchecker-engines](https://docs.soliditylang.org/en/v0.8.11/smtchecker.html#smtchecker-engines)
```json
"settings.modelChecker.solvers": ["smtlib2","z3"]
```
```
settings.modelChecker.targets=<targets>
```
And what targets to check
```
: --model-checker-targets assert,overflow
```
These options are not configurable currently for forge
Additional benefits would be when using the yul optimizer you would have access to the `ReasoningBasedSimplifier`
### Additional context
Additionally custom natspec parsing would help in tests
```js
/// @custom:smtchecker
```
maybe can be expanded to slither other tools, etc | non_main | feature smtchecker support a k a support all possible outputs component forge describe the feature you would like as of solidity is depreciating the experimental pragma for smt checker as it runs on all files if enabled solidity pragma experimental smtchecker the new way to configure this is through the json config file json settings modelchecker targets you have to define which engine it should run as well json settings modelchecker solvers settings modelchecker targets and what targets to check model checker targets assert overflow these options are not configurable currently for forge additional benefits would be when using the yul optimizer you would have access to the reasoningbasedsimplifier additional context additionally custom natspec parsing would help in tests js custom smtchecker maybe can be expanded to slither other tools etc | 0 |
290,497 | 21,882,612,196 | IssuesEvent | 2022-05-19 15:30:43 | KristyNerhaugen/password-generator | https://api.github.com/repos/KristyNerhaugen/password-generator | closed | Character Types | documentation | WHEN prompted for character types to include in the password
THEN I choose lowercase, uppercase, numeric, and/or special characters | 1.0 | Character Types - WHEN prompted for character types to include in the password
THEN I choose lowercase, uppercase, numeric, and/or special characters | non_main | character types when prompted for character types to include in the password then i choose lowercase uppercase numeric and or special characters | 0 |
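The password-generator record above describes choosing lowercase, uppercase, numeric, and/or special character types. A minimal sketch of that behavior (the function name, the particular special-character set, and the error handling are illustrative assumptions, not taken from the referenced project):

```python
import random
import string

def generate_password(length, lowercase=False, uppercase=False,
                      numeric=False, special=False):
    """Build a pool from the selected character types, then sample from it."""
    pool = ""
    if lowercase:
        pool += string.ascii_lowercase
    if uppercase:
        pool += string.ascii_uppercase
    if numeric:
        pool += string.digits
    if special:
        pool += "!@#$%^&*()"  # illustrative subset of special characters
    if not pool:
        raise ValueError("select at least one character type")
    return "".join(random.choice(pool) for _ in range(length))
```

For example, `generate_password(12, lowercase=True, numeric=True)` yields a 12-character string drawn only from lowercase letters and digits.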
4,909 | 25,249,758,264 | IssuesEvent | 2022-11-15 13:53:17 | precice/precice | https://api.github.com/repos/precice/precice | closed | Implement constants of actions as different data type (e.g. enum class instead of std::string) | enhancement maintainability | I propose the change of the API for clearer and cleaner to move from `std::string` as identifier for actions to `enum class` or something similar in [`src/cplscheme/Constants.cpp`](https://github.com/precice/precice/blob/bc27c26d195185e61663f580d469399c6587010f/src/cplscheme/Constants.cpp).
Major improvements:
- It limits the number of available actions in a "natural" way. An `enum class` can only take certain, pre-defined values while the `std::string` could take arbitrary values of arbitrary length.
- It separates description of what it is (an action) and what it does (writes initial data, e.g.) in a clean way. The `enum class` is called `Actions` and its members/values describe the actual action performed/checked for.
- Minimizes risk of forgetting to account for an action. One can use the `switch` statement and the compiler has the chance to warn you about not handling other actions. When using the definition of `Actions` mentioned below the following code would produce a compiler warning:
```c++
bool isActionRequired( const constants::Actions action) const {
switch (action) {
case Actions::writeInitialData: { //do something }
//Oh no! We forgot to handle other actions.
}
}
```
Minor improvements:
- Comparison should be cheaper than for strings (probably not a performance concern at the moment)
- For me personally it feels more correct in C++ to use `enum class`.
Drawback:
It will break compatibility with the API and thus is probably something for preCICE 3.0. However, one could introduce it in a future release to enable both the `std::string`-based and the `enum class`-based versions and phase out the old approach slowly.
The solution I would propose (I am playing with it myself at the moment) would look like that:
```c++
enum class Actions {
writeInitialData,
writeIterationCheckpoint,
readIterationCheckpoint
};
``` | True | Implement constants of actions as different data type (e.g. enum class instead of std::string) - I propose the change of the API for clearer and cleaner to move from `std::string` as identifier for actions to `enum class` or something similar in [`src/cplscheme/Constants.cpp`](https://github.com/precice/precice/blob/bc27c26d195185e61663f580d469399c6587010f/src/cplscheme/Constants.cpp).
Major improvements:
- It limits the number of available actions in a "natural" way. An `enum class` can only take certain, pre-defined values while the `std::string` could take arbitrary values of arbitrary length.
- It separates description of what it is (an action) and what it does (writes initial data, e.g.) in a clean way. The `enum class` is called `Actions` and its members/values describe the actual action performed/checked for.
- Minimizes risk of forgetting to account for an action. One can use the `switch` statement and the compiler has the chance to warn you about not handling other actions. When using the definition of `Actions` mentioned below the following code would produce a compiler warning:
```c++
bool isActionRequired( const constants::Actions action) const {
switch (action) {
case Actions::writeInitialData: { //do something }
//Oh no! We forgot to handle other actions.
}
}
```
Minor improvements:
- Comparison should be cheaper than for strings (probably not a performance concern at the moment)
- For me personally it feels more correct in C++ to use `enum class`.
Drawback:
It will break compatibility with the API and thus is probably something for preCICE 3.0. However, one could introduce it in a future release to enable both the `std::string`-based and the `enum class`-based versions and phase out the old approach slowly.
The solution I would propose (I am playing with it myself at the moment) would look like that:
```c++
enum class Actions {
writeInitialData,
writeIterationCheckpoint,
readIterationCheckpoint
};
``` | main | implement constants of actions as different data type e g enum class instead of std string i propose the change of the api for clearer and cleaner to move from std string as identifier for actions to enum class or something similar in major improvements it limits the number of available actions in a natural way an enum class can only take certain pre defined values while the std string could take arbitrary values of arbitrary length it separates description of what it is an action and what it does writes initial data e g in a clean way the enum class is called actions and its members values describe the actual action performed checked for minimizes risk of forgetting to account for an action one can use the switch statement and the compiler has the chance to warn you about handing other actions when using the definition of actions mentioned below the following code would produce a compiler warning c bool isactionrequired const constants actions action const switch action case actions writeinitialdata do something oh no we forgot to handle other actions minor improvements comparison should be cheaper than for strings probably not a performance concern at the moment for me personally it feels more correct in c to use enum class drawback it will break compatibility with the api and thus is probably something for precice however one could introduce it in a future release to enable both the std string based and the enum class based version and phase out the old approach slowly the solution i would propose i am playing with it myself at the moment would look like that c enum class actions writeinitialdata writeiterationcheckpoint readiterationcheckpoint | 1 |
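The preCICE record above is C++-specific, but its core argument, replacing an open-ended string identifier with a closed set of typed values, can be sketched in Python with `enum.Enum` (member names mirror the proposal; the lookup table below is illustrative, not preCICE's actual logic):

```python
from enum import Enum, auto

class Action(Enum):
    WRITE_INITIAL_DATA = auto()
    WRITE_ITERATION_CHECKPOINT = auto()
    READ_ITERATION_CHECKPOINT = auto()

def is_action_required(action: Action) -> bool:
    # A lookup keyed by enum members fails loudly (KeyError) for an
    # unhandled member, whereas an unmatched string comparison is silent.
    required = {
        Action.WRITE_INITIAL_DATA: True,
        Action.WRITE_ITERATION_CHECKPOINT: True,
        Action.READ_ITERATION_CHECKPOINT: False,
    }
    return required[action]
```

As in the C++ proposal, the type system (rather than runtime string comparison) now bounds the set of valid actions.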
5,829 | 21,333,609,143 | IssuesEvent | 2022-04-18 11:51:45 | red-hat-storage/ocs-ci | https://api.github.com/repos/red-hat-storage/ocs-ci | opened | Wait for project selection is namespace drop-down on OCP console | bug ui_automation | Issue seen with Run `1649886578`
Also take screenshots on important pages. | 1.0 | Wait for project selection is namespace drop-down on OCP console - Issue seen with Run `1649886578`
Also take screenshots on important pages. | non_main | wait for project selection is namespace drop down on ocp console issue seen with run also take screenshots on important pages | 0 |
798 | 4,415,180,075 | IssuesEvent | 2016-08-13 22:26:43 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | Optional Parameter called Source for win_feature | feature_idea waiting_on_maintainer windows | Issue Type:
Feature Idea
Component Name:
win_feature
Ansible Version: 1.9.2
Ansible Configuration:
Stock install with Extra's modules.
Environment:
Windows 2012 R2
Summary:
When trying to add a new feature like DotNet 3.5 core (feature is called 'NET-Framework-Core') to Win 2012 R2, it fails because 'NET-Framework-Core' did not exist on the box natively or was removed by a Windows Security Update. For it to install successfully, you need to be able to pass an argument called "source" to win_feature ie: D:\sources\sxs or \\IP\Share\sources\sxs so that it can pass it onto the cmdlet install-windowsfeature.
Steps To Reproduce:
On a Windows 2012 R2 Machine without DotNet 3.5 source, run the following:
ansible -m win_feature -a "name=NET-Framework-Core" windowsvms
Expected Results:
TASK: [Install DotNet Framework 3.5 Feature] **********************************
ok: [site-06]
ok: [site-05]
changed: [site-07]
Actual Results:
TASK: [Install DotNet Framework 3.5 Feature] **********************************
ok: [site-06]
ok: [site-05]
failed: [site-07] => {"changed": false, "exitcode": "Failed", "failed": true, "feature_result": [], "restart_needed": false, "success": false}
msg: Failed to add feature
If I get time tonight I will fix win_feature.ps1 and post the changes. | True | Optional Parameter called Source for win_feature - Issue Type:
Feature Idea
Component Name:
win_feature
Ansible Version: 1.9.2
Ansible Configuration:
Stock install with Extra's modules.
Environment:
Windows 2012 R2
Summary:
When trying to add a new feature like DotNet 3.5 core (feature is called 'NET-Framework-Core') to Win 2012 R2, it fails because 'NET-Framework-Core' did not exist on the box natively or was removed by a Windows Security Update. For it to install successfully, you need to be able to pass an argument called "source" to win_feature ie: D:\sources\sxs or \\IP\Share\sources\sxs so that it can pass it onto the cmdlet install-windowsfeature.
Steps To Reproduce:
On a Windows 2012 R2 Machine without DotNet 3.5 source, run the following:
ansible -m win_feature -a "name=NET-Framework-Core" windowsvms
Expected Results:
TASK: [Install DotNet Framework 3.5 Feature] **********************************
ok: [site-06]
ok: [site-05]
changed: [site-07]
Actual Results:
TASK: [Install DotNet Framework 3.5 Feature] **********************************
ok: [site-06]
ok: [site-05]
failed: [site-07] => {"changed": false, "exitcode": "Failed", "failed": true, "feature_result": [], "restart_needed": false, "success": false}
msg: Failed to add feature
If I get time tonight I will fix win_feature.ps1 and post the changes. | main | optional parameter called source for win feature issue type feature idea component name win feature ansible version ansible configuration stock install with extra s modules environment windows summary when trying to add a new feature like dotnet core feature is called net framework core to win it fails because net framework core did not exist on the box natively or was removed by a windows security update for it to install successfully you need to be able to pass an argument called source to win feature ie d sources sxs or ip share sources sxs so that it can pass it onto the cmdlet install windowsfeature steps to reproduce on a windows machine without dotnet source run the following ansible m win feature a name net framework core windowsvms expected results task ok ok changed actual results task ok ok failed changed false exitcode failed failed true feature result restart needed false success false msg failed to add feature if i get time tonight i will fix win feature and post the changes | 1 |
760 | 4,357,366,964 | IssuesEvent | 2016-08-02 01:25:35 | duckduckgo/zeroclickinfo-goodies | https://api.github.com/repos/duckduckgo/zeroclickinfo-goodies | closed | Dice: "Roll a dice" search doesn't trigger Instant Answer | Improvement Maintainer Approved Needs a Developer PR Received | Even though 'Dice' is the plural of 'Die' and "roll 'a' dice" is not entirely valid, 'roll a dice' should return the result of "roll a die".
------
IA Page: http://duck.co/ia/view/dice
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @loganom | True | Dice: "Roll a dice" search doesn't trigger Instant Answer - Even though 'Dice' is the plural of 'Die' and "roll 'a' dice" is not entirely valid, 'roll a dice' should return the result of "roll a die".
------
IA Page: http://duck.co/ia/view/dice
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @loganom | main | dice roll a dice search doesn t trigger instant answer even though dice is the plural of die and roll a dice is not entirely valid roll a dice should return the result of roll a die ia page loganom | 1 |
9,355 | 11,403,458,997 | IssuesEvent | 2020-01-31 07:16:16 | jrabbit/pyborg-1up | https://api.github.com/repos/jrabbit/pyborg-1up | closed | pip installs a newer aiohttp | Compatibility Packaging | ref #106
on stock pip on `python:3.8@2d2ae8451803` on docker
`ERROR: discord-py 1.3.0 has requirement aiohttp<3.7.0,>=3.6.0, but you'll have aiohttp 4.0.0a1 which is incompatible.` | True | pip installs a newer aiohttp - ref #106
on stock pip on `python:3.8@2d2ae8451803` on docker
`ERROR: discord-py 1.3.0 has requirement aiohttp<3.7.0,>=3.6.0, but you'll have aiohttp 4.0.0a1 which is incompatible.` | non_main | pip installs a newer aiohttp ref on stock pip on python on docker error discord py has requirement aiohttp but you ll have aiohttp which is incompatible | 0 |
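The conflict above arises because the pre-release `4.0.0a1` falls outside discord.py's pin `>=3.6.0,<3.7.0`. A rough sketch of how such a range check works (a simplified subset of PEP 440, assuming three-part release numbers; real resolvers implement the full specification):

```python
import re

def parse(version):
    """Turn '4.0.0a1' into a comparable tuple (simplified subset of
    PEP 440; assumes equal-length release numbers for comparisons)."""
    m = re.match(r"^(\d+(?:\.\d+)*)([ab]|rc)?(\d+)?$", version)
    if not m:
        raise ValueError(f"unparseable version: {version}")
    release = tuple(int(p) for p in m.group(1).split("."))
    # Pre-releases ('a', 'b', 'rc') sort before the final release of the
    # same number; 'z' is a sentinel that sorts after any pre-release tag.
    pre = (m.group(2), int(m.group(3) or 0)) if m.group(2) else ("z", 0)
    return release + (pre,)

def satisfies(version, lower, upper):
    """True if lower <= version < upper, e.g. the pin '>=3.6.0,<3.7.0'."""
    return parse(lower) <= parse(version) < parse(upper)
```

Under this check, `satisfies("4.0.0a1", "3.6.0", "3.7.0")` is false, which is exactly the incompatibility pip reports.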
22 | 2,523,647,398 | IssuesEvent | 2015-01-20 12:25:11 | simplesamlphp/simplesamlphp | https://api.github.com/repos/simplesamlphp/simplesamlphp | closed | Cleanup the SimpleSAML_Session class | enhancement maintainability started | The following must be done:
* Remove the default `NULL` value for the `$authority` parameter in the `getAuthState()` method. :+1:
* Remove the `getAttribute()` method. :+1:
* Remove the `setAttribute()` method. :+1:
* Remove the `setAttributes()` method. :+1:
* Remove the `getInstance()` method. :+1:
* Refactor. Should be renamed to `getCurrentSession()`. :+1: (see comment below)
* The error handling code should disappear.
* `session.disable_fallback` defaults to TRUE and goes away (exception is always thrown)
* Some functionality to avoid recursive loops, maybe solved in `Logger::getTracktId()`.
* Cleanup session initialization. Two flows:
* load by ID
* load by `$_REQUEST`
* Remove the `getAuthority()` method. :+1:
* Remove the `getAuthnRequest()` method. :+1:
* Remove the `setAuthnRequest()` method. :+1:
* Remove the `getIdP()` method. :+1:
* Remove the `setIdP()` method. :+1:
* Remove the `getSessionIndex()` method. :+1:
* Remove the `setSessionIndex()` method. :+1:
* Remove the `getNameId()` method. :+1:
* Remove the `setNameId()` method. :+1:
* Remove the `setSessionDuration()` method. :+1:
* Remove the `remainingTime()` method. :+1:
* Remove the `isAuthenticated()` method. :+1:
* Remove the `getAuthInstant()` method. :+1:
* Remove the `getAttributes()` method. :+1:
* Remove the `getSize()` method. :+1:
* Remove the `get_sp_list()` method. :+1:
* Remove the `expireDataLogout()` method. :+1:
* Remove the `getLogoutState()` and `setLogoutState()` methods. If there's callers, change the call with a direct access to `$state['LogoutState']`. :+1:
* Remove the `$authority` property. :+1:
* Remove the `DATA_TIMEOUT_LOGOUT` constant. Check dependencies in: :+1:
* `lib/SimpleSAML/Auth/Source.php` and :+1:
* `lib/SimpleSAML/IdP.php` :+1:
* Modify the `registerLogoutHandler()` method to add the `$authority` as a parameter. This is a previous step to wrapping this functionality into `SimpleSAML_Auth_Simple`. :+1: | True | Cleanup the SimpleSAML_Session class - The following must be done:
* Remove the default `NULL` value for the `$authority` parameter in the `getAuthState()` method. :+1:
* Remove the `getAttribute()` method. :+1:
* Remove the `setAttribute()` method. :+1:
* Remove the `setAttributes()` method. :+1:
* Remove the `getInstance()` method. :+1:
* Refactor. Should be renamed to `getCurrentSession()`. :+1: (see comment below)
* The error handling code should disappear.
* `session.disable_fallback` defaults to TRUE and goes away (exception is always thrown)
* Some functionality to avoid recursive loops, maybe solved in `Logger::getTracktId()`.
* Cleanup session initialization. Two flows:
* load by ID
* load by `$_REQUEST`
* Remove the `getAuthority()` method. :+1:
* Remove the `getAuthnRequest()` method. :+1:
* Remove the `setAuthnRequest()` method. :+1:
* Remove the `getIdP()` method. :+1:
* Remove the `setIdP()` method. :+1:
* Remove the `getSessionIndex()` method. :+1:
* Remove the `setSessionIndex()` method. :+1:
* Remove the `getNameId()` method. :+1:
* Remove the `setNameId()` method. :+1:
* Remove the `setSessionDuration()` method. :+1:
* Remove the `remainingTime()` method. :+1:
* Remove the `isAuthenticated()` method. :+1:
* Remove the `getAuthInstant()` method. :+1:
* Remove the `getAttributes()` method. :+1:
* Remove the `getSize()` method. :+1:
* Remove the `get_sp_list()` method. :+1:
* Remove the `expireDataLogout()` method. :+1:
* Remove the `getLogoutState()` and `setLogoutState()` methods. If there's callers, change the call with a direct access to `$state['LogoutState']`. :+1:
* Remove the `$authority` property. :+1:
* Remove the `DATA_TIMEOUT_LOGOUT` constant. Check dependencies in: :+1:
* `lib/SimpleSAML/Auth/Source.php` and :+1:
* `lib/SimpleSAML/IdP.php` :+1:
* Modify the `registerLogoutHandler()` method to add the `$authority` as a parameter. This is a previous step to wrapping this functionality into `SimpleSAML_Auth_Simple`. :+1: | main | cleanup the simplesaml session class the following must be done remove the default null value for the authority parameter in the getauthstate method remove the getattribute method remove the setattribute method remove the setattributes method remove the getinstance method refactor should be renamed to getcurrentsession see comment below the error handling code should disappear session disable fallback defaults to true and goes away exception is always thrown some functionality to avoid recursive loops maybe solved in logger gettracktid cleanup session initialization two flows load by id load by request remove the getauthority method remove the getauthnrequest method remove the setauthnrequest method remove the getidp method remove the setidp method remove the getsessionindex method remove the setsessionindex method remove the getnameid method remove the setnameid method remove the setsessionduration method remove the remainingtime method remove the isauthenticated method remove the getauthinstant method remove the getattributes method remove the getsize method remove the get sp list method remove the expiredatalogout method remove the getlogoutstate and setlogoutstate methods if there s callers change the call with a direct access to state remove the authority property remove the data timeout logout constant check dependencies in lib simplesaml auth source php and lib simplesaml idp php modify the registerlogouthandler method to add the authority as a parameter this is a previous step to wrapping this functionality into simplesaml auth simple | 1 |
5,627 | 28,151,993,885 | IssuesEvent | 2023-04-03 02:41:20 | medic/cht-roadmap | https://api.github.com/repos/medic/cht-roadmap | closed | Server monitoring | strat: Large CHT systems maintainable by admins | Make it easy for self and Medic hosted deployments to monitor resources and statuses of CHT instances. The monitoring falls into three main categories:
1. Generic stats from container like CPU usage, memory usage, disk space, network transfers, etc. This should be available through third party tools.
2. CouchDB stats like fragmentation, requests, response error rates, doc conflicts, etc. Look for third party tools for this, eg: https://github.com/gesellix/couchdb-prometheus-exporter
3. CHT specific stats like messaging queues, sentinel backlog, feedback docs, etc. This can be sourced from [this api](https://docs.communityhealthtoolkit.org/apps/reference/api/#get-apiv2monitoring).
Regardless of how this is implemented they should be all shown in one dashboard for ease of use. | True | Server monitoring - Make it easy for self and Medic hosted deployments to monitor resources and statuses of CHT instances. The monitoring falls into three main categories:
1. Generic stats from container like CPU usage, memory usage, disk space, network transfers, etc. This should be available through third party tools.
2. CouchDB stats like fragmentation, requests, response error rates, doc conflicts, etc. Look for third party tools for this, eg: https://github.com/gesellix/couchdb-prometheus-exporter
3. CHT specific stats like messaging queues, sentinel backlog, feedback docs, etc. This can be sourced from [this api](https://docs.communityhealthtoolkit.org/apps/reference/api/#get-apiv2monitoring).
Regardless of how this is implemented they should be all shown in one dashboard for ease of use. | main | server monitoring make it easy for self and medic hosted deployments to monitor resources and statuses of cht instances the monitoring falls into three main categories generic stats from container like cpu usage memory usage disk space network transfers etc this should be available through third party tools couchdb stats like fragmentation requests response error rates doc conflicts etc look for third party tools for this eg cht specific stats like messaging queues sentinel backlog feedback docs etc this can be sourced from regardless of how this is implemented they should be all shown in one dashboard for ease of use | 1 |
3,619 | 14,630,516,979 | IssuesEvent | 2020-12-23 17:52:38 | umn-asr/courses | https://api.github.com/repos/umn-asr/courses | opened | Update README with development section | courses maintainability | Add a development section with these subsections:
- [ ] setup
- [ ] testing
- [ ] deployment | True | Update README with development section - Add a development section with these subsections:
- [ ] setup
- [ ] testing
- [ ] deployment | main | update readme with development section add a development section with these subsections setup testing deployment | 1 |
3,318 | 12,876,787,322 | IssuesEvent | 2020-07-11 07:09:25 | geolexica/geolexica-server | https://api.github.com/repos/geolexica/geolexica-server | opened | Term languages are hardcoded in search | maintainability | https://github.com/geolexica/geolexica-server/blob/189aabee07651fcc8d453131ec209dd5498b4054/assets/js/concept-search-worker.js#L5-L18
Note that many of these codes are actually incorrect, and that many are missing. It probably explains #105. Anyway, they should be taken from site configuration. | True | Term languages are hardcoded in search - https://github.com/geolexica/geolexica-server/blob/189aabee07651fcc8d453131ec209dd5498b4054/assets/js/concept-search-worker.js#L5-L18
Note that many of these codes are actually incorrect, and that many are missing. It probably explains #105. Anyway, they should be taken from site configuration. | main | term languages are hardcoded in search note that many of these codes are actually incorrect and that many are missing it probably explains anyway they should be taken from site configuration | 1 |
84,766 | 10,417,820,304 | IssuesEvent | 2019-09-15 01:58:47 | golang/go | https://api.github.com/repos/golang/go | closed | net/http: Content-Length is not set in outgoing request when using ioutil.NopCloser | Documentation | <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/1041775/Library/Caches/go-build"
GOENV="/Users/1041775/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/1041775/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.13/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.13/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/1041775/projects/js-scripts/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/d6/809nhvwd23nd_wryrw60j6hdms016p/T/go-build703352534=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
Start a httpbin server locally.
<pre>
docker run -p 80:80 kennethreitz/httpbin
</pre>
Run the following program
```go
package main
import (
"bytes"
"io/ioutil"
"log"
"net/http"
)
func main() {
reqBody := ioutil.NopCloser(bytes.NewBufferString(`{}`))
req, err := http.NewRequest("POST", "http://localhost:80/post", reqBody)
if err != nil {
log.Fatalf("Cannot create request: %v", err)
}
res, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatalf("Cannot do: %v", err)
}
defer res.Body.Close()
resBody, err := ioutil.ReadAll(res.Body)
if err != nil {
log.Fatalf("Cannot read body: %v", err)
}
log.Printf("Response Body: %s", resBody)
}
```
### What did you expect to see?
Content-Length header is set when it is received by the server.
### What did you see instead?
Content-Length header is missing when it is received by the server.
<pre>
2019/09/14 12:55:00 Response Body: {
"args": {},
"data": "{}",
"files": {},
"form": {},
"headers": {
"Accept-Encoding": "gzip",
"Host": "localhost:80",
"Transfer-Encoding": "chunked",
"User-Agent": "Go-http-client/1.1"
},
"json": {},
"origin": "172.17.0.1",
"url": "http://localhost:80/post"
}
</pre>
Versus what I would receive if I use <pre>reqBody := bytes.NewBufferString(`{}`)</pre>.
<pre>
2019/09/14 12:55:22 Response Body: {
"args": {},
"data": "{}",
"files": {},
"form": {},
"headers": {
"Accept-Encoding": "gzip",
"Content-Length": "2",
"Host": "localhost:80",
"User-Agent": "Go-http-client/1.1"
},
"json": {},
"origin": "172.17.0.1",
"url": "http://localhost:80/post"
}
</pre>
| 1.0 | net/http: Content-Length is not set in outgoing request when using ioutil.NopCloser - <!-- Please answer these questions before submitting your issue. Thanks! -->
### What version of Go are you using (`go version`)?
<pre>
$ go version
go version go1.13 darwin/amd64
</pre>
### Does this issue reproduce with the latest release?
Yes
### What operating system and processor architecture are you using (`go env`)?
<details><summary><code>go env</code> Output</summary><br><pre>
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/1041775/Library/Caches/go-build"
GOENV="/Users/1041775/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/1041775/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.13/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.13/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/1041775/projects/js-scripts/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/d6/809nhvwd23nd_wryrw60j6hdms016p/T/go-build703352534=/tmp/go-build -gno-record-gcc-switches -fno-common"
</pre></details>
### What did you do?
<!--
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
-->
Start a httpbin server locally.
<pre>
docker run -p 80:80 kennethreitz/httpbin
</pre>
Run the following program
```go
package main
import (
"bytes"
"io/ioutil"
"log"
"net/http"
)
func main() {
reqBody := ioutil.NopCloser(bytes.NewBufferString(`{}`))
req, err := http.NewRequest("POST", "http://localhost:80/post", reqBody)
if err != nil {
log.Fatalf("Cannot create request: %v", err)
}
res, err := http.DefaultClient.Do(req)
if err != nil {
log.Fatalf("Cannot do: %v", err)
}
defer res.Body.Close()
resBody, err := ioutil.ReadAll(res.Body)
if err != nil {
log.Fatalf("Cannot read body: %v", err)
}
log.Printf("Response Body: %s", resBody)
}
```
### What did you expect to see?
Content-Length header is set when it is received by the server.
### What did you see instead?
Content-Length header is missing when it is received by the server.
<pre>
2019/09/14 12:55:00 Response Body: {
"args": {},
"data": "{}",
"files": {},
"form": {},
"headers": {
"Accept-Encoding": "gzip",
"Host": "localhost:80",
"Transfer-Encoding": "chunked",
"User-Agent": "Go-http-client/1.1"
},
"json": {},
"origin": "172.17.0.1",
"url": "http://localhost:80/post"
}
</pre>
Versus what I would receive if I use <pre>reqBody := bytes.NewBufferString(`{}`)</pre>.
<pre>
2019/09/14 12:55:22 Response Body: {
"args": {},
"data": "{}",
"files": {},
"form": {},
"headers": {
"Accept-Encoding": "gzip",
"Content-Length": "2",
"Host": "localhost:80",
"User-Agent": "Go-http-client/1.1"
},
"json": {},
"origin": "172.17.0.1",
"url": "http://localhost:80/post"
}
</pre>
| non_main | net http content length is not set in outgoing request when using ioutil nopcloser what version of go are you using go version go version go version darwin does this issue reproduce with the latest release yes what operating system and processor architecture are you using go env go env output go env goarch gobin gocache users library caches go build goenv users library application support go env goexe goflags gohostarch gohostos darwin gonoproxy gonosumdb goos darwin gopath users go goprivate goproxy goroot usr local cellar go libexec gosumdb sum golang org gotmpdir gotooldir usr local cellar go libexec pkg tool darwin gccgo gccgo ar ar cc clang cxx clang cgo enabled gomod users projects js scripts go mod cgo cflags g cgo cppflags cgo cxxflags g cgo fflags g cgo ldflags g pkg config pkg config gogccflags fpic pthread fno caret diagnostics qunused arguments fmessage length fdebug prefix map var folders t go tmp go build gno record gcc switches fno common what did you do if possible provide a recipe for reproducing the error a complete runnable program is good a link on play golang org is best start a httpbin server locally docker run p kennethreitz httpbin run the following program go package main import bytes io ioutil log net http func main reqbody ioutil nopcloser bytes newbufferstring req err http newrequest post reqbody if err nil log fatalf cannot create request v err res err http defaultclient do req if err nil log fatalf cannot do v err defer res body close resbody err ioutil readall res body if err nil log fatalf cannot read body v err log printf response body s resbody what did you expect to see content length header is set when it is received by the server what did you see instead content length header is missing when it is received by the server response body args data files form headers accept encoding gzip host localhost transfer encoding chunked user agent go http client json origin url versus what i would receive if i use reqbody bytes newbufferstring response body args data files form headers accept encoding gzip content length host localhost user agent go http client json origin url | 0
4,735 | 24,447,046,596 | IssuesEvent | 2022-10-06 18:55:40 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Contrib Group Application: markabur (port of emptyparagraphkiller) | Maintainer application Port complete | Hello and welcome to the contrib application process! We're happy to have you :)
**Please indicate how you intend to help the Backdrop community by joining this group**
Option 1: I would like to contribute a project
## Based on your selection above, please provide the following information:
**(option 1) The name of your module, theme, or layout**
Empty paragraph killer
## (option 1) Please note these 3 requirements for new contrib projects:
- [x] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [x] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [x] If porting a Drupal 7 project, Maintain the Git history from Drupal.
**(option 1 -- optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
https://www.drupal.org/project/emptyparagraphkiller/issues/3312732
**Post a link to your new Backdrop project under your own GitHub account (option 1)**
https://github.com/markabur/emptyparagraphkiller
**If you have chosen option 2 or 1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
<!-- (option 1) Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. -->
<!-- (option 1) Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. -->
| True | Contrib Group Application: markabur (port of emptyparagraphkiller) - Hello and welcome to the contrib application process! We're happy to have you :)
**Please indicate how you intend to help the Backdrop community by joining this group**
Option 1: I would like to contribute a project
## Based on your selection above, please provide the following information:
**(option 1) The name of your module, theme, or layout**
Empty paragraph killer
## (option 1) Please note these 3 requirements for new contrib projects:
- [x] Include a README.md file containing license and maintainer information.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/README.md
- [x] Include a LICENSE.txt file.
You can use this example: https://raw.githubusercontent.com/backdrop-ops/contrib/master/examples/LICENSE.txt.
- [x] If porting a Drupal 7 project, Maintain the Git history from Drupal.
**(option 1 -- optional) Post a link here to an issue in the drupal.org queue notifying the Drupal 7 maintainers that you are working on a Backdrop port of their project**
https://www.drupal.org/project/emptyparagraphkiller/issues/3312732
**Post a link to your new Backdrop project under your own GitHub account (option 1)**
https://github.com/markabur/emptyparagraphkiller
**If you have chosen option 2 or 1 above, do you agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement)**
YES
<!-- (option 1) Once we have a chance to review your project, we will check for the 3 requirements at the top of this issue. If those requirements are met, you will be invited to the @backdrop-contrib group. At that point you will be able to transfer the project. -->
<!-- (option 1) Please note that we may also include additional feedback in the code review, but anything else is only intended to be helpful, and is NOT a requirement for joining the contrib group. -->
| main | contrib group application markabur port of emptyparagraphkiller hello and welcome to the contrib application process we re happy to have you please indicate how you intend to help the backdrop community by joining this group option i would like to contribute a project based on your selection above please provide the following information option the name of your module theme or layout empty paragraph killer option please note these requirements for new contrib projects include a readme md file containing license and maintainer information you can use this example include a license txt file you can use this example if porting a drupal project maintain the git history from drupal option optional post a link here to an issue in the drupal org queue notifying the drupal maintainers that you are working on a backdrop port of their project post a link to your new backdrop project under your own github account option if you have chosen option or above do you agree to the yes | 1 |
55,851 | 23,617,448,567 | IssuesEvent | 2022-08-24 17:08:20 | operate-first/apps | https://api.github.com/repos/operate-first/apps | closed | Fybrik CRDs preventing Smaug Cluster-Resources app from syncing successfully | kind/bug area/service/argocd | ArgoCD App: https://argocd.operate-first.cloud/applications/cluster-resources-smaug?resource=sync%3AOutOfSync&operation=true
Example sync error for resources:
CustomResourceDefinition.apiextensions.k8s.io "plotters.app.fybrik.io" is invalid: status.storedVersions[0]: Invalid value: "v1alpha1": must appear in spec.versions
It is complaining about: https://github.com/operate-first/apps/blob/master/cluster-scope/base/apiextensions.k8s.io/customresourcedefinitions/plotters.app.fybrik.io/customresourcedefinition.yaml#L499
The PR in question: https://github.com/operate-first/apps/pull/2220
The kubeval test seemed to indicate passing before I merged it. But seemed to leave another comment after merge indicating it failed, logs are expired so I'm not sure if it as complaining about this particular issue.
| 1.0 | Fybrik CRDs preventing Smaug Cluster-Resources app from syncing successfully - ArgoCD App: https://argocd.operate-first.cloud/applications/cluster-resources-smaug?resource=sync%3AOutOfSync&operation=true
Example sync error for resources:
CustomResourceDefinition.apiextensions.k8s.io "plotters.app.fybrik.io" is invalid: status.storedVersions[0]: Invalid value: "v1alpha1": must appear in spec.versions
It is complaining about: https://github.com/operate-first/apps/blob/master/cluster-scope/base/apiextensions.k8s.io/customresourcedefinitions/plotters.app.fybrik.io/customresourcedefinition.yaml#L499
The PR in question: https://github.com/operate-first/apps/pull/2220
The kubeval test seemed to indicate passing before I merged it. But seemed to leave another comment after merge indicating it failed, logs are expired so I'm not sure if it as complaining about this particular issue.
| non_main | fybrik crds preventing smaug cluster resources app from syncing successfully argocd app example sync error for resources customresourcedefinition apiextensions io plotters app fybrik io is invalid status storedversions invalid value must appear in spec versions it is complaining about the pr in question the kubeval test seemed to indicate passing before i merged it but seemed to leave another comment after merge indicating it failed logs are expired so i m not sure if it as complaining about this particular issue | 0 |
1,926 | 6,598,715,917 | IssuesEvent | 2017-09-16 09:45:15 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | closed | Feature request: `audit --download` should check if app name is correct | awaiting maintainer feedback | ### Description of feature/enhancement
`brew cask audit --download cask` should check if a dmg contains an app like described in the Cask.
### Justification
We are already downloading the dmg to check the checksum, it should be easy to mount the dmg (the source code is there) and check if the app exists.
### Example use case
For example, in this MR the Name of the app changed:
https://github.com/caskroom/homebrew-cask/pull/37346/files
Travis would have shown a green build, if I hadn't change the app name. A "careless" merge would have broken the Cask for everyone, as the app could not be copied during install.
- - -
I would implement it myself, but I have a real hard time to get started with ruby and I think it is a small feature for the hbc developers. Thank you, as always for your hard work.
| True | Feature request: `audit --download` should check if app name is correct - ### Description of feature/enhancement
`brew cask audit --download cask` should check if a dmg contains an app like described in the Cask.
### Justification
We are already downloading the dmg to check the checksum, it should be easy to mount the dmg (the source code is there) and check if the app exists.
### Example use case
For example, in this MR the Name of the app changed:
https://github.com/caskroom/homebrew-cask/pull/37346/files
Travis would have shown a green build, if I hadn't change the app name. A "careless" merge would have broken the Cask for everyone, as the app could not be copied during install.
- - -
I would implement it myself, but I have a real hard time to get started with ruby and I think it is a small feature for the hbc developers. Thank you, as always for your hard work.
| main | feature request audit download should check if app name is correct description of feature enhancement brew cask audit download cask should check if a dmg contains an app like described in the cask justification we are already downloading the dmg to check the checksum it should be easy to mount the dmg the source code is there and check if the app exists example use case for example in this mr the name of the app changed travis would have shown a green build if i hadn t change the app name a careless merge would have broken the cask for everyone as the app could not be copied during install i would implement it myself but i have a real hard time to get started with ruby and i think it is a small feature for the hbc developers thank you as always for your hard work | 1 |
139,560 | 20,910,711,264 | IssuesEvent | 2022-03-24 09:03:33 | ASE-Projekte-WS-2021/ase-ws-21-zusammenleben | https://api.github.com/repos/ASE-Projekte-WS-2021/ase-ws-21-zusammenleben | opened | Design Prototype | important Design | Prototype of finished UI Screens (Color Theme, Fonts, Icon Style, Button Style etc.) | 1.0 | Design Prototype - Prototype of finished UI Screens (Color Theme, Fonts, Icon Style, Button Style etc.) | non_main | design prototype prototype of finished ui screens color theme fonts icon style button style etc | 0 |
50,594 | 6,403,861,355 | IssuesEvent | 2017-08-06 22:34:11 | a8cteam51/strikestart | https://api.github.com/repos/a8cteam51/strikestart | opened | On larger screens, consider moving navigation to the left-hand side | design enhancement low-priority | Might be a more effective use of space.... | 1.0 | On larger screens, consider moving navigation to the left-hand side - Might be a more effective use of space.... | non_main | on larger screens consider moving navigation to the left hand side might be a more effective use of space | 0 |
142,302 | 21,713,322,806 | IssuesEvent | 2022-05-10 15:32:30 | gordon-cs/gordon-360-ui | https://api.github.com/repos/gordon-cs/gordon-360-ui | opened | Use Tabbed UI on Involvements page | Enhancement Visual Design | Currently, the Involvements page has three cards, stacked vertically, with different views of involvements data:
1. Requests (both sent and received)
2. My Involvements
3. All Involvements
A user is generally only interested in one of these cards, and they're never interested in more than one at a time. For that reason, I think a tabbed UI would be a cleaner, more user-friendly interface for this page. | 1.0 | Use Tabbed UI on Involvements page - Currently, the Involvements page has three cards, stacked vertically, with different views of involvements data:
1. Requests (both sent and received)
2. My Involvements
3. All Involvements
A user is generally only interested in one of these cards, and they're never interested in more than one at a time. For that reason, I think a tabbed UI would be a cleaner, more user-friendly interface for this page. | non_main | use tabbed ui on involvements page currently the involvements page has three cards stacked vertically with different views of involvements data requests both sent and received my involvements all involvements a user is generally only interested in one of these cards and they re never interested in more than one at a time for that reason i think a tabbed ui would be a cleaner more user friendly interface for this page | 0 |
347,993 | 10,437,195,970 | IssuesEvent | 2019-09-17 21:23:20 | osulp/Scholars-Archive | https://api.github.com/repos/osulp/Scholars-Archive | closed | Remove extra space in Visibility box | Priority: Low User Interface | ### Descriptive summary
There's an extra space before the bulleted terms begin.
### Expected behavior
Space should not be there.

| 1.0 | Remove extra space in Visibility box - ### Descriptive summary
There's an extra space before the bulleted terms begin.
### Expected behavior
Space should not be there.

| non_main | remove extra space in visibility box descriptive summary there s an extra space before the bulleted terms begin expected behavior space should not be there | 0 |
5,152 | 26,252,517,782 | IssuesEvent | 2023-01-05 20:43:04 | chocolatey-community/chocolatey-package-requests | https://api.github.com/repos/chocolatey-community/chocolatey-package-requests | closed | RFM - plexmediaserver | Status: Available For Maintainer(s) | ## Current Maintainer
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://community.chocolatey.org/packages/plexmediaserver
Package source URL: https://github.com/mikecole/chocolatey-packages/tree/master/automatic/plexmediaserver
This is a working package with a functional AU script. I simply don't have the capacity to keep it updated as it is a popular package. Currently, there are a handful of requests to add the 64-bit version and I have not been able to field these requests. I will help as much as I can to transfer ownership. | True | RFM - plexmediaserver - ## Current Maintainer
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://community.chocolatey.org/packages/plexmediaserver
Package source URL: https://github.com/mikecole/chocolatey-packages/tree/master/automatic/plexmediaserver
This is a working package with a functional AU script. I simply don't have the capacity to keep it updated as it is a popular package. Currently, there are a handful of requests to add the 64-bit version and I have not been able to field these requests. I will help as much as I can to transfer ownership. | main | rfm plexmediaserver current maintainer i am the maintainer of the package and wish to pass it to someone else checklist issue title starts with rfm existing package details package url package source url this is a working package with a functional au script i simply don t have the capacity to keep it updated as it is a popular package currently there are a handful of requests to add the bit version and i have not been able to field these requests i will help as much as i can to transfer ownership | 1 |
677,897 | 23,179,410,626 | IssuesEvent | 2022-07-31 22:28:37 | ls1intum/Artemis | https://api.github.com/repos/ls1intum/Artemis | closed | Students can see plagiarism accusations before they are notified | bug plagiarism detection priority:high | ### Describe the bug
In the EiSt course we noticed, that some students could see a plagiarism accusation despite the fact that they hadn't been notified. The students can see the plagiarism case button but when they want to access the page they get the message that they aren't authorised to access it (see screenshots). The students shouldn't see the button at all because they haven't been notified.
### To Reproduce
1. Create an exercise
2. Submit at least 2 similar solutions
3. Run the plagiarism check for this exercise
4. Flag the matches as plagiarism
5. Log into a student account and go to the exercise
6. Click on the button and access the page
### Expected behavior
Normaly the student shouldn't get notified before an instructor notifies him over the plagiarism cases interface. The plagiarism button and the page shouldn't be visible to the student until then.
### Screenshots


### What browsers are you seeing the problem on?
Chrome
### Additional context
_No response_
### Relevant log output
_No response_ | 1.0 | Students can see plagiarism accusations before they are notified - ### Describe the bug
In the EiSt course we noticed, that some students could see a plagiarism accusation despite the fact that they hadn't been notified. The students can see the plagiarism case button but when they want to access the page they get the message that they aren't authorised to access it (see screenshots). The students shouldn't see the button at all because they haven't been notified.
### To Reproduce
1. Create an exercise
2. Submit at least 2 similar solutions
3. Run the plagiarism check for this exercise
4. Flag the matches as plagiarism
5. Log into a student account and go to the exercise
6. Click on the button and access the page
### Expected behavior
Normaly the student shouldn't get notified before an instructor notifies him over the plagiarism cases interface. The plagiarism button and the page shouldn't be visible to the student until then.
### Screenshots


### What browsers are you seeing the problem on?
Chrome
### Additional context
_No response_
### Relevant log output
_No response_ | non_main | students can see plagiarism accusations before they are notified describe the bug in the eist course we noticed that some students could see a plagiarism accusation despite the fact that they hadn t been notified the students can see the plagiarism case button but when they want to access the page they get the message that they aren t authorised to access it see screenshots the students shouldn t see the button at all because they haven t been notified to reproduce create an exercise submit at least similar solutions run the plagiarism check for this exercise flag the matches as plagiarism log into a student account and go to the exercise click on the button and access the page expected behavior normaly the student shouldn t get notified before an instructor notifies him over the plagiarism cases interface the plagiarism button and the page shouldn t be visible to the student until then screenshots what browsers are you seeing the problem on chrome additional context no response relevant log output no response | 0 |
449,152 | 31,830,460,943 | IssuesEvent | 2023-09-14 10:14:07 | ueberdosis/tiptap | https://api.github.com/repos/ueberdosis/tiptap | opened | [Documentation]: | Type: Documentation Category: Open Source | ### What’s the URL to the page you’re sending feedback for?
https://tiptap.dev/api/utilities/suggestion
### What part of the documentation needs improvement?
https://tiptap.dev/api/utilities/suggestion
### What is helpful about that part?
It is helpful to know that we can allow spaces without closing the mentioning functionality
### What is hard to understand, missing or misleading?
It is missing to explain that it also allows you to add a second "@" char in the string without closing the mention. Without it, it was not possible to search by email, as typing "@sara@" would close mentions on that last char input.
### Anything to add? (optional)
I dont know if that was intentional or unintentional, but it was an issue that I encountered with it so really glad that it can get fixed that way! | 1.0 | [Documentation]: - ### What’s the URL to the page you’re sending feedback for?
https://tiptap.dev/api/utilities/suggestion
### What part of the documentation needs improvement?
https://tiptap.dev/api/utilities/suggestion
### What is helpful about that part?
It is helpful to know that we can allow spaces without closing the mentioning functionality
### What is hard to understand, missing or misleading?
It is missing to explain that it also allows you to add a second "@" char in the string without closing the mention. Without it, it was not possible to search by email, as typing "@sara@" would close mentions on that last char input.
### Anything to add? (optional)
I dont know if that was intentional or unintentional, but it was an issue that I encountered with it so really glad that it can get fixed that way! | non_main | what’s the url to the page you’re sending feedback for what part of the documentation needs improvement what is helpful about that part it is helpful to know that we can allow spaces without closing the mentioning functionality what is hard to understand missing or misleading it is missing to explain that it also allows you to add a second char in the string without closing the mention without it it was not possible to search by email as typing sara would close mentions on that last char input anything to add optional i dont know if that was intentional or unintentional but it was an issue that i encountered with it so really glad that it can get fixed that way | 0 |
102,938 | 12,832,496,038 | IssuesEvent | 2020-07-07 07:46:29 | Blazored/Modal | https://api.github.com/repos/Blazored/Modal | closed | Support Bootstrap/custom modal markup | Feature Request Needs: Design | Modify the BlazoredModalInstance.razor component to support bootstrap, and other templates, when emitting HTML markup.
I suggest creating a Modal.Framework enumeration that includes:
Blazored,
Bootstrap,
Custom,
etc.
When adding the BlazoredModal component to the MainLayout, the user can supply this value as an attribute (i.e. `<BlazoredModal Framework="Bootstrap">` ) to configure all modals to render with their specified framework; Framework.Blazored could continue as the default preventing any breaking changes to existing implementations.
When the Modal.Framework has been specified, the BlazorModalInstance.razor component would emit bootstrap modal HTML elements with the appropriate structure, layout and css classes, instead of the default markup as currently implemented.
Also, a method can be provided to supply custom Markup for the modal, allowing for specialized layout and HTML for more advanced layouts.
Alternatively, fork the branch and create bootstrap compatible markup and styles.
| 1.0 | Support Bootstrap/custom modal markup - Modify the BlazoredModalInstance.razor component to support bootstrap, and other templates, when emitting HTML markup.
I suggest creating a Modal.Framework enumeration that includes:
Blazored,
Bootstrap,
Custom,
etc.
When adding the BlazoredModal component to the MainLayout, the user can supply this value as an attribute (i.e. `<BlazoredModal Framework="Bootstrap">` ) to configure all modals to render with their specified framework; Framework.Blazored could continue as the default preventing any breaking changes to existing implementations.
When the Modal.Framework has been specified, the BlazoredModalInstance.razor component would emit bootstrap modal HTML elements with the appropriate structure, layout and css classes, instead of the default markup as currently implemented.
Also, a method can be provided to supply custom Markup for the modal, allowing for specialized layout and HTML for more advanced layouts.
Alternatively, fork the branch and create bootstrap compatible markup and styles.
| non_main | support bootstrap custom modal markup modify the blazoredmodalinstance razor component to support bootstrap and other templates when emitting html markup i suggest creating a modal framework enumeration that includes blazored bootstrap custom etc when adding the blazoredmodal component to the mainlayout the user can supply this value as an attribute i e to configure all modals to render with their specified framework framework blazored could continue as the default preventing any breaking changes to existing implementations when the modal framework has been specified the blazormodalinstance razor component would emit bootstrap modal html elements with the appropriate structure layout and css classes instead of the default markup as currently implemented also a method can be provided to supply custom markup for the modal allowing for specialized layout and html for more advanced layouts alternatively fork the branch and create bootstrap compatible markup and styles | 0 |
809,912 | 30,217,358,672 | IssuesEvent | 2023-07-05 16:31:14 | bloom-works/handbook | https://api.github.com/repos/bloom-works/handbook | opened | Add content: Information for contractors | priority-2 content-new | ## Where to find the content
https://docs.google.com/document/d/1IWmW3pYjCnryU8rEQeenjtBX7VtvQeL-4AfVJHcyPXc/edit#heading=h.6iujc7r4hlx
This issue includes subsections, but might consider breaking each subsection into its own issue:
- Onboarding
- Paperwork
- How to log time
- Travel on behalf of Bloom
- Offboarding
## Where the content will go
Top level, after 'Working partners and coalitions'
## Person responsible for this content
@dottiebobottie (confirm)
## Approver
_who will approve it or already did // we'll also track this via this github workflow_
| 1.0 | Add content: Information for contractors - ## Where to find the content
https://docs.google.com/document/d/1IWmW3pYjCnryU8rEQeenjtBX7VtvQeL-4AfVJHcyPXc/edit#heading=h.6iujc7r4hlx
This issue includes subsections, but might consider breaking each subsection into its own issue:
- Onboarding
- Paperwork
- How to log time
- Travel on behalf of Bloom
- Offboarding
## Where the content will go
Top level, after 'Working partners and coalitions'
## Person responsible for this content
@dottiebobottie (confirm)
## Approver
_who will approve it or already did // we'll also track this via this github workflow_
| non_main | add content information for contractors where to find the content this issue includes subsections but might consider breaking each subsection into its own issue onboarding paperwork how to log time travel on behalf of bloom offboarding where the content will go top level after working partners and coalitions person responsible for this content dottiebobottie confirm approver who will approve it or already did we ll also track this via this github workflow | 0 |
120,896 | 15,819,573,363 | IssuesEvent | 2021-04-05 17:41:53 | fluxcd/flux | https://api.github.com/repos/fluxcd/flux | closed | Deal with multiple tags referring to the same image layer, and moving tags | blocked-design question vague | Many library images have tags that track e.g., major versions. That means any given image pushed will have two or three tags that refer to it; e.g., `{alpine:2, alpine:2.3, alpine:2.3.4}`.
We do in general know that these are the same, since the manifests refer to the image layer hash.
By the same token, we could know when e.g., a `":latest"` image is different, since we can check the image layer hash from Kubernetes (or given in the manifest), against the one we see in the image registry.
| 1.0 | Deal with multiple tags referring to the same image layer, and moving tags - Many library images have tags that track e.g., major versions. That means any given image pushed will have two or three tags that refer to it; e.g., `{alpine:2, alpine:2.3, alpine:2.3.4}`.
We do in general know that these are the same, since the manifests refer to the image layer hash.
By the same token, we could know when e.g., a `":latest"` image is different, since we can check the image layer hash from Kubernetes (or given in the manifest), against the one we see in the image registry.
| non_main | deal with multiple tags referring to the same image layer and moving tags many library images have tags that track e g major versions that means any given image pushed will have two or three tags that refer to it e g alpine alpine alpine we do in general know that these are the same since the manifests refer to the image layer hash by the same token we could know when e g a latest image is different since we can check the image layer hash from kubernetes or given in the manifest against the one we see in the image registry | 0 |
330,826 | 28,487,240,929 | IssuesEvent | 2023-04-18 08:43:06 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/group3/rollup·ts - lens app - group 3 lens rollup tests should allow seamless transition to and from table view | Team:Visualizations failed-test Feature:Lens | A test failed on a tracked branch
```
Error: timed out waiting for assertExpectedText -- last error: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="metric_label"])
Wait timed out after 10052ms
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-cd690b02a97aedf7/elastic/kibana-on-merge/kibana/node_modules/selenium-webdriver/lib/webdriver.js:929:17
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at onFailure (retry_for_truthy.ts:39:13)
at retryForSuccess (retry_for_success.ts:59:13)
at retryForTruthy (retry_for_truthy.ts:27:3)
at RetryService.waitForWithTimeout (retry.ts:45:5)
at Object.assertExpectedText (lens_page.ts:85:7)
at Object.assertLegacyMetric (lens_page.ts:1181:7)
at Context.<anonymous> (rollup.ts:76:7)
at Object.apply (wrap_function.js:73:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/28977#01878f5c-4722-4f05-8d9c-d144e7962604)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/group3/rollup·ts","test.name":"lens app - group 3 lens rollup tests should allow seamless transition to and from table view","test.failCount":1}} --> | 1.0 | Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/group3/rollup·ts - lens app - group 3 lens rollup tests should allow seamless transition to and from table view - A test failed on a tracked branch
```
Error: timed out waiting for assertExpectedText -- last error: TimeoutError: Waiting for element to be located By(css selector, [data-test-subj="metric_label"])
Wait timed out after 10052ms
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-cd690b02a97aedf7/elastic/kibana-on-merge/kibana/node_modules/selenium-webdriver/lib/webdriver.js:929:17
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at onFailure (retry_for_truthy.ts:39:13)
at retryForSuccess (retry_for_success.ts:59:13)
at retryForTruthy (retry_for_truthy.ts:27:3)
at RetryService.waitForWithTimeout (retry.ts:45:5)
at Object.assertExpectedText (lens_page.ts:85:7)
at Object.assertLegacyMetric (lens_page.ts:1181:7)
at Context.<anonymous> (rollup.ts:76:7)
at Object.apply (wrap_function.js:73:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/28977#01878f5c-4722-4f05-8d9c-d144e7962604)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/group3/rollup·ts","test.name":"lens app - group 3 lens rollup tests should allow seamless transition to and from table view","test.failCount":1}} --> | non_main | failing test chrome x pack ui functional tests x pack test functional apps lens rollup·ts lens app group lens rollup tests should allow seamless transition to and from table view a test failed on a tracked branch error timed out waiting for assertexpectedtext last error timeouterror waiting for element to be located by css selector wait timed out after at var lib buildkite agent builds kb spot elastic kibana on merge kibana node modules selenium webdriver lib webdriver js at runmicrotasks at processticksandrejections node internal process task queues at onfailure retry for truthy ts at retryforsuccess retry for success ts at retryfortruthy retry for truthy ts at retryservice waitforwithtimeout retry ts at object assertexpectedtext lens page ts at object assertlegacymetric lens page ts at context rollup ts at object apply wrap function js first failure | 0 |
742,526 | 25,860,036,099 | IssuesEvent | 2022-12-13 16:18:24 | oceanprotocol/df-py | https://api.github.com/repos/oceanprotocol/df-py | closed | Create route/feed for allocations in purgatory. | Priority: Mid | ### Problem:
Right now, there is no way to view the items in purgatory that the user has allocated to.
This keeps the user from being able to adjust or reset their allocation.
### Candidate Solutions
Cand A
- Show items from purgatory where the user has allocated to, and force them to reallocate
- Users should then be able to reallocate away from these assets, and call "Update Allocations" once
Cand B
- Do not show items from purgatory, but create a "Reset Allocations" button that enables the user to reset their allocations
- This requires 2 txs to reset. 1 for the reset. 2 for the allocation.
### DoD:
- [ ] BE is able to get required purgatory + allocation data in order to show to user, and have them reallocate | 1.0 | Create route/feed for allocations in purgatory. - ### Problem:
Right now, there is no way to view the items in purgatory that the user has allocated to.
This keeps the user from being able to adjust or reset their allocation.
### Candidate Solutions
Cand A
- Show items from purgatory where the user has allocated to, and force them to reallocate
- Users should then be able to reallocate away from these assets, and call "Update Allocations" once
Cand B
- Do not show items from purgatory, but create a "Reset Allocations" button that enables the user to reset their allocations
- This requires 2 txs to reset. 1 for the reset. 2 for the allocation.
### DoD:
- [ ] BE is able to get required purgatory + allocation data in order to show to user, and have them reallocate | non_main | create route feed for allocations in purgatory problem right now there is no way to view the items in purgatory that the user has allocated to this keeps the user from being able to adjust or reset their allocation candidate solutions cand a show items from purgatory where the user has allocated to and force them to reallocate users should then be able to reallocate away from these assets and call update allocations once cand b do not show items from purgatory but create a reset allocations button that enables the user to reset their allocations this requires txs to reset for the reset for the allocation dod be is able to get required purgatory allocation data in order to show to user and have them reallocate | 0 |
17,483 | 24,096,606,188 | IssuesEvent | 2022-09-19 19:21:21 | dotnet/docs | https://api.github.com/repos/dotnet/docs | closed | [Breaking change]: Socket.End methods no longer throw ObjectDisposedException | doc-idea breaking-change Pri1 binary incompatible :checkered_flag: Release: .NET 7 in-pr | ### Description
Starting from .NET 7, `System.Net.Sockets.Socket.End*` methods (e.g. `EndSend`) throw `SocketException` instead of `ObjectDisposedException` if the socket is closed.
### Version
.NET 7 Preview 7
### Previous behavior
`System.Net.Sockets.Socket.End*` methods were throwing `ObjectDisposedException` for closed sockets.
### New behavior
`System.Net.Sockets.Socket.End*` methods throw `SocketException` with `SocketErrorCode` set to `SocketError.OperationAborted`.
### Type of breaking change
- [X] **Binary incompatible**: Existing binaries may encounter a breaking change in behavior, such as failure to load/execute or different run-time behavior.
- [ ] **Source incompatible**: Source code may encounter a breaking change in behavior when targeting the new runtime/component/SDK, such as compile errors or different run-time behavior.
### Reason for change
Starting with .NET 6.0, the legacy Socket APM (Begin/End) APIs are backed with Task based implementation as part of our effort to consolidate and simplify the Socket codebase. Unfortunately, it turned out that the 6.0 implementation was leaking [unobserved](https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.taskscheduler.unobservedtaskexception?view=net-6.0) `SocketException`-s [even when used correctly](https://github.com/dotnet/runtime/issues/61411#issuecomment-968585218) (meaning that the user code makes sure that the End methods are always invoked, including the case when the socket is closed).
Changing the behavior was a simple and straightforward way to make sure that no unobserved exceptions are leaked in such cases.
### Recommended action
All code that expects (catches) `ObjectDisposedException` thrown from any of the `Socket.End*` methods should be changed to expect `SocketException` and refer to `SocketException.SocketErrorCode` to query the underlying reason.
Note: As mentioned earlier, APM code should *always* make sure that the corresponding End methods are invoked after the Begin methods, even if the socket is closed.
### Feature area
Networking
### Affected APIs
System.Net.Sockets.Socket.EndConnect
System.Net.Sockets.Socket.EndDisconnect
System.Net.Sockets.Socket.EndSend
System.Net.Sockets.Socket.EndSendFile
System.Net.Sockets.Socket.EndSendTo
System.Net.Sockets.Socket.EndReceive
System.Net.Sockets.Socket.EndAccept | True | [Breaking change]: Socket.End methods no longer throw ObjectDisposedException - ### Description
Starting from .NET 7, `System.Net.Sockets.Socket.End*` methods (e.g. `EndSend`) throw `SocketException` instead of `ObjectDisposedException` if the socket is closed.
### Version
.NET 7 Preview 7
### Previous behavior
`System.Net.Sockets.Socket.End*` methods were throwing `ObjectDisposedException` for closed sockets.
### New behavior
`System.Net.Sockets.Socket.End*` methods throw `SocketException` with `SocketErrorCode` set to `SocketError.OperationAborted`.
### Type of breaking change
- [X] **Binary incompatible**: Existing binaries may encounter a breaking change in behavior, such as failure to load/execute or different run-time behavior.
- [ ] **Source incompatible**: Source code may encounter a breaking change in behavior when targeting the new runtime/component/SDK, such as compile errors or different run-time behavior.
### Reason for change
Starting with .NET 6.0, the legacy Socket APM (Begin/End) APIs are backed with Task based implementation as part of our effort to consolidate and simplify the Socket codebase. Unfortunately, it turned out that the 6.0 implementation was leaking [unobserved](https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.taskscheduler.unobservedtaskexception?view=net-6.0) `SocketException`-s [even when used correctly](https://github.com/dotnet/runtime/issues/61411#issuecomment-968585218) (meaning that the user code makes sure that the End methods are always invoked, including the case when the socket is closed).
Changing the behavior was a simple and straightforward way to make sure that no unobserved exceptions are leaked in such cases.
### Recommended action
All code that expects (catches) `ObjectDisposedException` thrown from any of the `Socket.End*` methods should be changed to expect `SocketException` and refer to `SocketException.SocketErrorCode` to query the underlying reason.
Note: As mentioned earlier, APM code should *always* make sure that the corresponding End methods are invoked after the Begin methods, even if the socket is closed.
### Feature area
Networking
### Affected APIs
System.Net.Sockets.Socket.EndConnect
System.Net.Sockets.Socket.EndDisconnect
System.Net.Sockets.Socket.EndSend
System.Net.Sockets.Socket.EndSendFile
System.Net.Sockets.Socket.EndSendTo
System.Net.Sockets.Socket.EndReceive
System.Net.Sockets.Socket.EndAccept | non_main | socket end methods no longer throw objectdisposedexception description starting from net system net sockets socket end methods e g endsend throw socketexception instead of objectdisposedexcption if the socket is closed version net preview previous behavior system net sockets socket end methods were throwing objectdisposedexception for closed sockets new behavior system net sockets socket end methods throw socketexception with socketerrorcode set to socketerror operationaborted type of breaking change binary incompatible existing binaries may encounter a breaking change in behavior such as failure to load execute or different run time behavior source incompatible source code may encounter a breaking change in behavior when targeting the new runtime component sdk such as compile errors or different run time behavior reason for change starting with net the legacy socket apm begin end apis are backed with task based implementation as part of our effort to consolidate and simplify the socket codebase unfortunately it turned out that the implementation was leaking socketexception s meaning that the user code makes sure that the end methods are always invoked including the case when the socket is closed changing the behavior was a simple a straightforward way to make sure that no unobserved exceptions are leaked in such cases recommended action all code that expects catches objectdisposedexception thrown from any of the socket end methods should be changed to expect socketexception and refer to socketexception socketerrorcode to query the underlying reason note as mentioned earlier apm code should always make sure that the corresponding end methods are invoked after the begin methods even if the socket is closed feature area networking affected apis system net sockets socket endconnect system net sockets socket enddisconnect system net sockets socket endsend system net sockets socket endsendfile system net sockets socket 
endsendto system net sockets socket endreceive system net sockets socket endaccept | 0 |
3,208 | 12,243,732,383 | IssuesEvent | 2020-05-05 09:47:07 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | opened | sockets/posix: poll()/select() implementation should be factored out from sockets subsys into POSIX subsys | Maintainer area: POSIX | POSIX poll() was initially implemented in the scope of BSD Sockets API, and currently located in subsys/net/lib/sockets/ . It is however a generic POSIX facility, and was generalized to apply to arbitrary file descriptors (and objects represented by them). As we grow implementations of such objects (eventfd, unix domain sockets), dependency on networking sockets becomes a problem, as it requires enabling unrelated config options and leads to pulling more code into user applications.
To address this, poll() (and by extension, select()) implementation should be migrated to lib/posix/.
| True | sockets/posix: poll()/select() implementation should be factored out from sockets subsys into POSIX subsys - POSIX poll() was initially implemented in the scope of BSD Sockets API, and currently located in subsys/net/lib/sockets/ . It is however a generic POSIX facility, and was generalized to apply to arbitrary file descriptors (and objects represented by them). As we grow implementations of such objects (eventfd, unix domain sockets), dependency on networking sockets becomes a problem, as it requires enabling unrelated config options and leads to pulling more code into user applications.
To address this, poll() (and by extension, select()) implementation should be migrated to lib/posix/.
| main | sockets posix poll select implementation should be factored out from sockets subsys into posix subsys posix poll was initially implemented in the scope of bsd sockets api and currently located in subsys net lib sockets it is however a generic posix facility and was generalized to apply to arbitrary file descriptors and objects represented by them as we grow implementations of such objects eventfd unix domain sockets dependency on networking sockets becomes a problem as requires enabling unrelated config options and leads to pulling more code into user applications to address this poll and by extension select implementation should be migrated to lib posix | 1 |
420,620 | 12,240,267,805 | IssuesEvent | 2020-05-04 23:46:07 | Berkmann18/lh-avg | https://api.github.com/repos/Berkmann18/lh-avg | opened | LH JSON export | Priority: Low Type: Enhancement :bulb: | <!--
Thank you for suggesting an idea to make this project better!
Please fill in as much of the template below as you're able.
-->
**Is your feature request related to a problem? Please describe.**
No
**Describe the solution you'd like**
Perhaps consider adding support for reading LH exported JSON files by looking at the `'lhr-*.json'.categories.(performance|accessibility|best-practices|seo|pwa).score` (note: for the pwa stuff, looking at the `auditRefs` and doing some maths or simply adding support for `p/a/bp/seo/pwa` inputs)
**Describe alternatives you've considered**
Please describe alternative solutions or features you have considered.
Having the CLI accept JSON files which are _dumbed down_ versions of the LH JSON export.
| 1.0 | LH JSON export - <!--
Thank you for suggesting an idea to make this project better!
Please fill in as much of the template below as you're able.
-->
**Is your feature request related to a problem? Please describe.**
No
**Describe the solution you'd like**
Perhaps consider adding support for reading LH exported JSON files by looking at the `'lhr-*.json'.categories.(performance|accessibility|best-practices|seo|pwa).score` (note: for the pwa stuff, looking at the `auditRefs` and doing some maths or simply adding support for `p/a/bp/seo/pwa` inputs)
**Describe alternatives you've considered**
Please describe alternative solutions or features you have considered.
Having the CLI accept JSON files which are _dumbed down_ versions of the LH JSON export.
| non_main | lh json export thank you for suggesting an idea to make this project better please fill in as much of the template below as you re able is your feature request related to a problem please describe no describe the solution you d like perhaps consider adding support reading lh exported json files by looking at the lhr json categories performance accessibility best practices seo pwa score note for the pwa stuff looking at the auditrefs and doing some maths or simply adding support for p a bp seo pwa inputs describe alternatives you ve considered please describe alternative solutions or features you have considered having the cli accept json files which are dumbed down versions of the lh json export | 0 |
363,218 | 10,739,522,499 | IssuesEvent | 2019-10-29 16:30:15 | acl-services/paprika | https://api.github.com/repos/acl-services/paprika | closed | Date Picker Positioning bug | Bug 🐞 High Priority ↑ | # Bug Report
## Prerequisites
**This is just a checklist, please delete this section after**
Please answer the following questions for yourself before submitting a bug report.
- [ ] I checked to make sure that this bug has not already been filed
- [ ] I reproduced the bug in the latest version of paprika
- [ ] Please label as, low / medium / high priority
*Please fill in as much detail as possible below*
## Expected behavior
Date Picker popup positioning needs to be able to position itself correctly for example when it is at the bottom of a page or scrollable area.


## Current behavior
*What is the current behavior?*
## Screenshots / Gifs / Codepens
*Include media to help illustrate the bug. Gifs help a lot!*
## Additional context
*Please provide any more additional details to illustrate the bug*
| 1.0 | Date Picker Positioning bug - # Bug Report
## Prerequisites
**This is just a checklist, please delete this section after**
Please answer the following questions for yourself before submitting a bug report.
- [ ] I checked to make sure that this bug has not already been filed
- [ ] I reproduced the bug in the latest version of paprika
- [ ] Please label as, low / medium / high priority
*Please fill in as much detail as possible below*
## Expected behavior
Date Picker popup positioning needs to be able to position itself correctly for example when it is at the bottom of a page or scrollable area.


## Current behavior
*What is the current behavior?*
## Screenshots / Gifs / Codepens
*Include media to help illustrate the bug. Gifs help a lot!*
## Additional context
*Please provide any more additional details to illustrate the bug*
| non_main | date picker positioning bug bug report prerequisites this is just a checklist please delete this section after please answer the following questions for yourself before submitting a bug report i checked to make sure that this bug has not already been filed i reproduced the bug in the latest version of paprika please label as low medium high priority please fill in as much detail as possible below expected behavior date picker popup positioning needs to be able to position itself correctly for example when it is at the bottom of a page or scrollable area current behavior what is the current behavior screenshots gifs codepens include media to help illustrate the bug gifs help a lot additional context please provide any more additional details to illustrate the bug | 0 |
3,315 | 12,833,941,761 | IssuesEvent | 2020-07-07 10:09:05 | spack/spack | https://api.github.com/repos/spack/spack | opened | Remove explicit version enumeration in "containerize" related code | feature maintainers | As a maintainer I want to remove the explicit enumeration of Spack versions in:
- https://github.com/spack/spack/blob/develop/lib/spack/spack/container/images.json
- https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/container.py
so that there will be one less place to update when cutting a new release.
### Rationale
Recently the release process has been documented, with TODO for improvement on the overall process:
https://github.com/spack/spack/blob/9ec9327f5aacc7b62a1469771c8917547393676d/lib/spack/docs/developer_guide.rst#L621-L626
### Description
A proper solution might need some discussion and might involve:
- Computing the versions that are currently in `images.json` dynamically (by querying Dockerhub?)
- Move the check on the YAML file from the schema to a later dynamic check.
This section will be updated as the discussion on this issue progresses.
### Additional information
```console
$ spack --version
0.15.0-62-d65a076c0
```
### General information
- [x] I have run `spack --version` and reported the version of Spack
- [x] I have searched the issues of this repo and believe this is not a duplicate
<!--If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!
--> | True | Remove explicit version enumeration in "containerize" related code - As a maintainer I want to remove the explicit enumeration of Spack versions in:
- https://github.com/spack/spack/blob/develop/lib/spack/spack/container/images.json
- https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/container.py
so that there will be one less place to update when cutting a new release.
### Rationale
Recently the release process has been documented, with TODO for improvement on the overall process:
https://github.com/spack/spack/blob/9ec9327f5aacc7b62a1469771c8917547393676d/lib/spack/docs/developer_guide.rst#L621-L626
### Description
A proper solution might need some discussion and might involve:
- Computing the versions that are currently in `images.json` dynamically (by querying Dockerhub?)
- Move the check on the YAML file from the schema to a later dynamic check.
This section will be updated as the discussion on this issue progresses.
### Additional information
```console
$ spack --version
0.15.0-62-d65a076c0
```
### General information
- [x] I have run `spack --version` and reported the version of Spack
- [x] I have searched the issues of this repo and believe this is not a duplicate
<!--If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!
--> | main | remove explicit version enumeration in containerize related code as a maintainer i want to remove the explicit enumeration of spack versions in so that there will be one place less to update when cutting a new release rationale recently the release process has been documented with todo for improvement on the overall process description a proper solution might need some discussion and might involve computing the versions that are currently in images json dynamically by querying dockerhub move the check on the yaml file from the schema to a later dynamic check this section will be updated as the discussion on this issue progresses additional information console spack version general information i have run spack version and reported the version of spack i have searched the issues of this repo and believe this is not a duplicate if you want to ask a question about the tool how to use it what it can currently do etc try the general channel on our slack first we have a welcoming community and chances are you ll get your reply faster and without opening an issue other than that thanks for taking the time to contribute to spack | 1 |
46,920 | 24,783,220,833 | IssuesEvent | 2022-10-24 07:39:38 | getsentry/sentry-javascript | https://api.github.com/repos/getsentry/sentry-javascript | closed | Upgrade to web-vitals v3 | Type: Improvement Feature: Performance Package: tracing Status: Backlog | ### Problem Statement
Web Vitals v3 is currently in beta.
Allows us to start tracking INP, which can replace FID.
https://web.dev/inp/
> Writing your own [PerformanceObserver](https://developer.mozilla.org/docs/Web/API/PerformanceObserver) to measure INP can be difficult. To measure INP in JavaScript, it's advised that you use the web-vitals JavaScript library, which exports an onINP function to do this work for you. At the moment, getting INP data is only possible in version 3 of web-vitals, currently in beta, which can be installed with the following command:
> ```js
> npm install web-vitals@next --save
> ```
> You can then get a page's INP by passing a function to the onINP method:
> ```js
> import {onINP} from 'web-vitals';
> onINP(({value}) => {
> // Log the value to the console, or send it to your analytics provider.
> console.log(value);
> });
> ```
> As with other methods exported by web-vitals, onINP accepts a function as an argument, and will pass metric data to the function you give it. From there, you can send that data to an endpoint for collection and analysis.
### Solution Brainstorm
https://www.npmjs.com/package/web-vitals/v/next
https://github.com/GoogleChrome/web-vitals/compare/next
Seems like it's around ~2kb larger. Can we address that? | True | Upgrade to web-vitals v3 - ### Problem Statement
Web Vitals v3 is currently in beta.
Allows us to start tracking INP, which can replace FID.
https://web.dev/inp/
> Writing your own [PerformanceObserver](https://developer.mozilla.org/docs/Web/API/PerformanceObserver) to measure INP can be difficult. To measure INP in JavaScript, it's advised that you use the web-vitals JavaScript library, which exports an onINP function to do this work for you. At the moment, getting INP data is only possible in version 3 of web-vitals, currently in beta, which can be installed with the following command:
> ```js
> npm install web-vitals@next --save
> ```
> You can then get a page's INP by passing a function to the onINP method:
> ```js
> import {onINP} from 'web-vitals';
> onINP(({value}) => {
> // Log the value to the console, or send it to your analytics provider.
> console.log(value);
> });
> ```
> As with other methods exported by web-vitals, onINP accepts a function as an argument, and will pass metric data to the function you give it. From there, you can send that data to an endpoint for collection and analysis.
### Solution Brainstorm
https://www.npmjs.com/package/web-vitals/v/next
https://github.com/GoogleChrome/web-vitals/compare/next
Seems like it's around ~2kb larger. Can we address that? | non_main | upgrade to web vitals problem statement web vitals is currently in beta allows us to start tracking inp which can replace fid writing your own to measure inp can be difficult to measure inp in javascript it s advised that you use the web vitals javascript library which exports an oninp function to do this work for you at the moment getting inp data is only possible in version of web vitals currently in beta which can be installed with the following command js npm install web vitals next save you can then get a page s inp by passing a function to the oninp method js import oninp from web vitals oninp value log the value to the console or send it to your analytics provider console log value as with other methods exported by web vitals oninp accepts a function as an argument and will pass metric data to the function you give it from there you can send that data to an endpoint for collection and analysis solution brainstorm seems like it s around larger can we address that | 0
273,028 | 20,767,870,228 | IssuesEvent | 2022-03-15 23:03:56 | jennaanderson00/taskinator | https://api.github.com/repos/jennaanderson00/taskinator | closed | Initial Setup | documentation | #### Requirements
* Create the task tracking HTML page that needs a:
* Header
* Main content area for the task list
* Footer
* Use the style sheet provided
* Add functionality to the button to add tasks to the list | 1.0 | Initial Setup - #### Requirements
* Create the task tracking HTML page that needs a:
* Header
* Main content area for the task list
* Footer
* Use the style sheet provided
* Add functionality to the button to add tasks to the list | non_main | initial setup requirements create the task tracking html page that needs a header main content area for the task list footer use the style sheet provided add functionality to the button to add tasks to the list | 0 |
597,192 | 18,157,455,443 | IssuesEvent | 2021-09-27 04:53:38 | PazerOP/tf2_bot_detector | https://api.github.com/repos/PazerOP/tf2_bot_detector | closed | Keybind request | Type: Enhancement Priority: Low | **A clear and concise description of what the problem is.**
> When using and checking out player cheaters, I dislike having the chat warnings spam, and it takes a minute to alt-tab due to fullscreen. Also, when if you're with a friend and you just want it off for the moment without having to alt-tab.
**Describe the solution you'd like**
> The ability the bind a combination of keys to toggle things like chat warnings, auto votekick, and automark. They could be blank by default.
**Describe alternatives you've considered**
> None, I honestly think keybinds, even optional ones, are always useful.
**Additional context**
> None really.
| 1.0 | Keybind request - **A clear and concise description of what the problem is.**
> When using and checking out player cheaters, I dislike having the chat warnings spam, and it takes a minute to alt-tab due to fullscreen. Also, when if you're with a friend and you just want it off for the moment without having to alt-tab.
**Describe the solution you'd like**
> The ability the bind a combination of keys to toggle things like chat warnings, auto votekick, and automark. They could be blank by default.
**Describe alternatives you've considered**
> None, I honestly think keybinds, even optional ones, are always useful.
**Additional context**
> None really.
| non_main | keybind request a clear and concise description of what the problem is when using and checking out player cheaters i dislike having the chat warnings spam and it takes a minute to alt tab due to fullscreen also when if you re with a friend and you just want it off for the moment without having to alt tab describe the solution you d like the ability the bind a combination of keys to toggle things like chat warnings auto votekick and automark they could be blank by default describe alternatives you ve considered none i honestly think keybinds even optional ones are always useful additional context none really | 0 |
843 | 4,489,398,038 | IssuesEvent | 2016-08-30 10:51:48 | Particular/EndToEnd | https://api.github.com/repos/Particular/EndToEnd | reopened | Upgrade .NET Framework to latest release on VM's | Project: PerfTests State: In Progress - Maintainer Prio |
The VM's are not running any of the latest patch releases 4.6.1 or 4.6.2 of the .NET Framework.
Version 4.6.1. has documented numerous performance improvements which might have an impact on the results. The release notes of 4.6.2 do not state any performance improvements.
.NET Framework 4.6.1
- WPF improvements for spell check, support for per-user custom dictionaries and improved touch performance.
- Enhanced support for Elliptic Curve Digital Signature Algorithm (ECDSA) X509 certificates.
- Added support in SQL Connectivity for AlwaysOn, Always Encrypted and improved connection open resiliency when connecting to Azure SQL Database.
Azure SQL Database now supports distributed transactions using the updated System.Transactions APIs .
- Many other performance, stability, and reliability related fixes in RyuJIT, GC, WPF and WCF.
.NET Framework 4.6.2
- Support for paths longer than 260 characters
- Support for FIPS 186-3 DSA in X.509 certificates
- TLS 1.1/1.2 support for ClickOnce
- Support for localization of data annotations in ASP.NET
- Enabling .NET desktop apps with Project Centennial
- Soft keyboard and per-monitor DPI support for WPF
| True | Upgrade .NET Framework to latest release on VM's -
The VM's are not running any of the latest patch releases 4.6.1 or 4.6.2 of the .NET Framework.
Version 4.6.1. has documented numerous performance improvements which might have an impact on the results. The release notes of 4.6.2 do not state any performance improvements.
.NET Framework 4.6.1
- WPF improvements for spell check, support for per-user custom dictionaries and improved touch performance.
- Enhanced support for Elliptic Curve Digital Signature Algorithm (ECDSA) X509 certificates.
- Added support in SQL Connectivity for AlwaysOn, Always Encrypted and improved connection open resiliency when connecting to Azure SQL Database.
Azure SQL Database now supports distributed transactions using the updated System.Transactions APIs .
- Many other performance, stability, and reliability related fixes in RyuJIT, GC, WPF and WCF.
.NET Framework 4.6.2
- Support for paths longer than 260 characters
- Support for FIPS 186-3 DSA in X.509 certificates
- TLS 1.1/1.2 support for ClickOnce
- Support for localization of data annotations in ASP.NET
- Enabling .NET desktop apps with Project Centennial
- Soft keyboard and per-monitor DPI support for WPF
| main | upgrade net framework to latest release on vm s the vm s are not running any of the latest patch releases or of the net framework version has documented numerous performance improvements which might have an impact on the results the release notes of do not state any performance improvements net framework wpf improvements for spell check support for per user custom dictionaries and improved touch performance enhanced support for elliptic curve digital signature algorithm ecdsa certificates added support in sql connectivity for alwayson always encrypted and improved connection open resiliency when connecting to azure sql database azure sql database now supports distributed transactions using the updated system transactions apis many other performance stability and reliability related fixes in ryujit gc wpf and wcf net framework support for paths longer than characters support for fips dsa in x certificates tls support for clickonce support for localization of data annotations in asp net enabling net desktop apps with project centennial soft keyboard and per monitor dpi support for wpf | 1 |
429 | 3,520,869,219 | IssuesEvent | 2016-01-12 22:40:25 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | opened | Proposal: change `appcast` `:sha256` to `:checkpoint` | awaiting maintainer feedback | Refs https://github.com/caskroom/homebrew-cask/pull/16948. If agreed upon, steps:
- [ ] Deprecate `:sha256` in `appcast`.
- [ ] Add `:checkpoint` (or other name) to `appcast`.
- [ ] Add [sourceforge `appcast`s](https://github.com/caskroom/homebrew-cask/issues/16685).
- [ ] Update documentation.
- [ ] Update details in https://github.com/caskroom/homebrew-cask/issues/16529.
- [ ] Update details in https://github.com/caskroom/homebrew-cask/issues/16689. | True | Proposal: change `appcast` `:sha256` to `:checkpoint` - Refs https://github.com/caskroom/homebrew-cask/pull/16948. If agreed upon, steps:
- [ ] Deprecate `:sha256` in `appcast`.
- [ ] Add `:checkpoint` (or other name) to `appcast`.
- [ ] Add [sourceforge `appcast`s](https://github.com/caskroom/homebrew-cask/issues/16685).
- [ ] Update documentation.
- [ ] Update details in https://github.com/caskroom/homebrew-cask/issues/16529.
- [ ] Update details in https://github.com/caskroom/homebrew-cask/issues/16689. | main | proposal change appcast to checkpoint refs if agreed upon steps deprecate in appcast add checkpoint or other name to appcast add update documentation update details in update details in | 1 |
803,573 | 29,183,438,414 | IssuesEvent | 2023-05-19 13:41:28 | aleksbobic/csx | https://api.github.com/repos/aleksbobic/csx | opened | Show delete and expand in advanced search | enhancement priority:medium Complexity:medium | Delete and expand actions should be visible also in advanced search | 1.0 | Show delete and expand in advanced search - Delete and expand actions should be visible also in advanced search | non_main | show delete and expand in advanced search delete and expand actions should be visible also in advanced search | 0 |
263,778 | 28,056,680,990 | IssuesEvent | 2023-03-29 09:49:53 | tamirverthim/src | https://api.github.com/repos/tamirverthim/src | reopened | CVE-2017-7301 (High) detected in srcdc68203c791d61132fb706541d939ef160c2e638 | Mend: dependency security vulnerability | ## CVE-2017-7301 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>srcdc68203c791d61132fb706541d939ef160c2e638</b></p></summary>
<p>
<p>Public git conversion mirror of OpenBSD's official CVS src repository. Pull requests not accepted - send diffs to the tech@ mailing list.</p>
<p>Library home page: <a href=https://github.com/openbsd/src.git>https://github.com/openbsd/src.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/tamirverthim/src/commits/250560ac3a6cd973d828db0972dd561343848d2b">250560ac3a6cd973d828db0972dd561343848d2b</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/gnu/usr.bin/binutils-2.17/bfd/aoutx.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/gnu/usr.bin/binutils-2.17/bfd/aoutx.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/gnu/usr.bin/binutils-2.17/bfd/aoutx.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Binary File Descriptor (BFD) library (aka libbfd), as distributed in GNU Binutils 2.28, has an aout_link_add_symbols function in bfd/aoutx.h that has an off-by-one vulnerability because it does not carefully check the string offset. The vulnerability could lead to a GNU linker (ld) program crash.
<p>Publish Date: 2017-03-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-7301>CVE-2017-7301</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://sourceware.org/bugzilla/show_bug.cgi?id=20924">https://sourceware.org/bugzilla/show_bug.cgi?id=20924</a></p>
<p>Release Date: 2017-03-29</p>
<p>Fix Resolution: 2.28</p>
</p>
</details>
<p></p>
| True | CVE-2017-7301 (High) detected in srcdc68203c791d61132fb706541d939ef160c2e638 - ## CVE-2017-7301 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>srcdc68203c791d61132fb706541d939ef160c2e638</b></p></summary>
<p>
<p>Public git conversion mirror of OpenBSD's official CVS src repository. Pull requests not accepted - send diffs to the tech@ mailing list.</p>
<p>Library home page: <a href=https://github.com/openbsd/src.git>https://github.com/openbsd/src.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/tamirverthim/src/commits/250560ac3a6cd973d828db0972dd561343848d2b">250560ac3a6cd973d828db0972dd561343848d2b</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/gnu/usr.bin/binutils-2.17/bfd/aoutx.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/gnu/usr.bin/binutils-2.17/bfd/aoutx.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/gnu/usr.bin/binutils-2.17/bfd/aoutx.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Binary File Descriptor (BFD) library (aka libbfd), as distributed in GNU Binutils 2.28, has an aout_link_add_symbols function in bfd/aoutx.h that has an off-by-one vulnerability because it does not carefully check the string offset. The vulnerability could lead to a GNU linker (ld) program crash.
<p>Publish Date: 2017-03-29
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-7301>CVE-2017-7301</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://sourceware.org/bugzilla/show_bug.cgi?id=20924">https://sourceware.org/bugzilla/show_bug.cgi?id=20924</a></p>
<p>Release Date: 2017-03-29</p>
<p>Fix Resolution: 2.28</p>
</p>
</details>
<p></p>
| non_main | cve high detected in cve high severity vulnerability vulnerable library public git conversion mirror of openbsd s official cvs src repository pull requests not accepted send diffs to the tech mailing list library home page a href found in head commit a href vulnerable source files gnu usr bin binutils bfd aoutx h gnu usr bin binutils bfd aoutx h gnu usr bin binutils bfd aoutx h vulnerability details the binary file descriptor bfd library aka libbfd as distributed in gnu binutils has an aout link add symbols function in bfd aoutx h that has an off by one vulnerability because it does not carefully check the string offset the vulnerability could lead to a gnu linker ld program crash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
50,153 | 6,062,345,312 | IssuesEvent | 2017-06-14 09:12:04 | FreeRDP/FreeRDP | https://api.github.com/repos/FreeRDP/FreeRDP | closed | [MacOS-Shadow-Server] Trackpad always scrolls down | fixed-waiting-test | Server:
./server/shadow/freerdp-shadow-cli /version
FreeRDP version 2.0.0-dev (git 1dbd2d28d)
macOS Sierra 10.12.5
Client:
Microsoft Remote Desktop Client for MacOS 8.0.39
Problem: It doesn't matter if I try to scroll up or down from the client using the trackpad of my MacBook, the content always scrolls down. | 1.0 | [MacOS-Shadow-Server] Trackpad always scrolls down - Server:
./server/shadow/freerdp-shadow-cli /version
FreeRDP version 2.0.0-dev (git 1dbd2d28d)
macOS Sierra 10.12.5
Client:
Microsoft Remote Desktop Client for MacOS 8.0.39
Problem: It doesn't matter if I try to scroll up or down from the client using the trackpad of my MacBook, the content always scrolls down. | non_main | trackpad always scrolls down server server shadow freerdp shadow cli version freerdp version dev git macos sierra client microsoft remote desktop client for macos problem it doesn t matter if i try to scroll up or down from the client using the trackpad of my macbook the content always scrolls down | 0 |
56,478 | 11,584,474,333 | IssuesEvent | 2020-02-22 17:33:02 | T4g1/gd-wildjam18 | https://api.github.com/repos/T4g1/gd-wildjam18 | closed | Last confirmation sound is not played when finishing puzzle 1 | code priority: low wontfix | This as to do with how sound are handled. We need a global system like Game to handle audio playback. | 1.0 | Last confirmation sound is not played when finishing puzzle 1 - This as to do with how sound are handled. We need a global system like Game to handle audio playback. | non_main | last confirmation sound is not played when finishing puzzle this as to do with how sound are handled we need a global system like game to handle audio playback | 0 |
566 | 4,044,279,011 | IssuesEvent | 2016-05-21 07:15:20 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Rust Cargo Packages: triggering feedback from Reddit user | Maintainer Input Requested | A Reddit user left feedback on this IA [here](https://www.reddit.com/r/rust/comments/4gujbf/help_improve_duckduckgos_rustrelated_searches/d2l45d5):
> In particular, I notice that in the Rust Cargo Packages IA, it only links to the one with an exact matching name, which isn't very helpful if you're just looking for crates that deal with a topic without knowing the name. Not sure if IA's can provide a list of the top few matches, but a lot of times that would be more helpful. Also, it can be invoked with "cargo package" or "rust package", but not "cargo crate" or "rust crate", which is generally the terminology used in Rust.
------
IA Page: http://duck.co/ia/view/rust_cargo
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @TomBebbington | True | Rust Cargo Packages: triggering feedback from Reddit user - A Reddit user left feedback on this IA [here](https://www.reddit.com/r/rust/comments/4gujbf/help_improve_duckduckgos_rustrelated_searches/d2l45d5):
> In particular, I notice that in the Rust Cargo Packages IA, it only links to the one with an exact matching name, which isn't very helpful if you're just looking for crates that deal with a topic without knowing the name. Not sure if IA's can provide a list of the top few matches, but a lot of times that would be more helpful. Also, it can be invoked with "cargo package" or "rust package", but not "cargo crate" or "rust crate", which is generally the terminology used in Rust.
------
IA Page: http://duck.co/ia/view/rust_cargo
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @TomBebbington | main | rust cargo packages triggering feedback from reddit user a reddit user left feedback on this ia in particular i notice that in the rust cargo packages ia it only links to the one with an exact matching name which isn t very helpful if you re just looking for crates that deal with a topic without knowing the name not sure if ia s can provide a list of the top few matches but a lot of times that would be more helpful also it can be invoked with cargo package or rust package but not cargo crate or rust crate which is generally the terminology used in rust ia page tombebbington | 1 |
2,707 | 9,531,849,419 | IssuesEvent | 2019-04-29 17:01:47 | codestation/qcma | https://api.github.com/repos/codestation/qcma | closed | Error installing Qcma on Ubuntu 18.10 | unmaintained | This is the output i get every time:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'qcma' instead of './qcma_0.4.2_amd64.deb'
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
qcma : Depends: libavcodec57 (>= 7:3.4.2) but it is not installable or
libavcodec-extra57 (>= 7:3.4.2) but it is not installable
Depends: libavformat57 (>= 7:3.4.2) but it is not installable
Depends: libavutil55 (>= 7:3.4.2) but it is not installable
Depends: libswscale4 (>= 7:3.4.2) but it is not installable
E: Unable to correct problems, you have held broken packages. | True | Error installing Qcma on Ubuntu 18.10 - This is the output i get every time:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'qcma' instead of './qcma_0.4.2_amd64.deb'
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
qcma : Depends: libavcodec57 (>= 7:3.4.2) but it is not installable or
libavcodec-extra57 (>= 7:3.4.2) but it is not installable
Depends: libavformat57 (>= 7:3.4.2) but it is not installable
Depends: libavutil55 (>= 7:3.4.2) but it is not installable
Depends: libswscale4 (>= 7:3.4.2) but it is not installable
E: Unable to correct problems, you have held broken packages. | main | error installing qcma on ubuntu this is the output i get every time reading package lists done building dependency tree reading state information done note selecting qcma instead of qcma deb some packages could not be installed this may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of incoming the following information may help to resolve the situation the following packages have unmet dependencies qcma depends but it is not installable or libavcodec but it is not installable depends but it is not installable depends but it is not installable depends but it is not installable e unable to correct problems you have held broken packages | 1 |
4,207 | 20,679,494,949 | IssuesEvent | 2022-03-10 12:34:36 | BioArchLinux/Packages | https://api.github.com/repos/BioArchLinux/Packages | closed | [BUG] The ugene package does not install everything needed | maintain | OK this is annoying. I made the package based on my ugene-git package (which works) but apparently is the ugene package in BioArchLinux not installing everything (PREFIX has to be set on both "qmake" and "make install", but even that did not help). When upstream changed how things are supposed to be installed in ugene, it really messed up the package.
Sorry.
I will have to try to figure out what goes wrong in the release version of the package. I will do that sometime in the coming days. | True | [BUG] The ugene package does not install everything needed - OK this is annoying. I made the package based on my ugene-git package (which works) but apparently is the ugene package in BioArchLinux not installing everything (PREFIX has to be set on both "qmake" and "make install", but even that did not help). When upstream changed how things are supposed to be installed in ugene, it really messed up the package.
Sorry.
I will have to try to figure out what goes wrong in the release version of the package. I will do that sometime in the coming days. | main | the ugene package does not install everything needed ok this is annoying i made the package based on my ugene git package which works but apparently is the ugene package in bioarchlinux not installing everything prefix has to be set on both qmake and make install but even that did not help when upstream changed how things are supposed to be installed in ugene it really messed up the package sorry i will have to try to figure out what goes wrong in the release version of the package i will do that sometime in the coming days | 1 |
1,332 | 5,715,070,420 | IssuesEvent | 2017-04-19 12:12:06 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | asa_command loses first line for show version, possible prompt detection issues? | affects_2.2 bug_report networking waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
asa_command
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0 (stable-2.2 c920c8bc3b) last updated 2016/11/08 21:29:37 (GMT +000)
lib/ansible/modules/core: (detached HEAD 164225aa43) last updated 2016/11/08 21:29:47 (GMT +000)
lib/ansible/modules/extras: (detached HEAD 18bb736cc2) last updated 2016/11/08 21:29:55 (GMT +000)
config file = /root/.ansible.cfg
configured module search path = ['/modules/', '/vendor/f5-ansible/library']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
it appears the first line of `show version` when using the asa_command module is missing.
```
Cisco Adaptive Security Appliance Software Version 9.0(4)24 <context>
```
```
"stdout": [
"\nDevice Manager Version 6.4(5)\n\nCompiled on Tue 23-Aug-16 09:36 PDT by builders\n\
```
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Use the ASA command module for a `show version` on an ASA context firewall.
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| True | asa_command loses first line for show version, possible prompt detection issues? - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
asa_command
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0 (stable-2.2 c920c8bc3b) last updated 2016/11/08 21:29:37 (GMT +000)
lib/ansible/modules/core: (detached HEAD 164225aa43) last updated 2016/11/08 21:29:47 (GMT +000)
lib/ansible/modules/extras: (detached HEAD 18bb736cc2) last updated 2016/11/08 21:29:55 (GMT +000)
config file = /root/.ansible.cfg
configured module search path = ['/modules/', '/vendor/f5-ansible/library']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
##### SUMMARY
<!--- Explain the problem briefly -->
it appears the first line of `show version` when using the asa_command module is missing.
```
Cisco Adaptive Security Appliance Software Version 9.0(4)24 <context>
```
```
"stdout": [
"\nDevice Manager Version 6.4(5)\n\nCompiled on Tue 23-Aug-16 09:36 PDT by builders\n\
```
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
Use the ASA command module for a `show version` on an ASA context firewall.
<!--- Paste example playbooks or commands between quotes below -->
```
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
| main | asa command loses first line for show version possible prompt detection issues issue type bug report component name asa command ansible version ansible stable last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file root ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific summary it appears the first line of show version when using the asa command module is missing cisco adaptive security appliance software version stdout ndevice manager version n ncompiled on tue aug pdt by builders n steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used use the asa command module for a show version on an asa context firewall expected results actual results | 1 |
312,740 | 26,873,910,176 | IssuesEvent | 2023-02-04 20:13:21 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | closed | Generalization test for the tag Informações Institucionais - Leis Municipais - Espinosa | generalization test development template - Síntese tecnologia e informatica (88) tag - Informações Institucionais subtag - Leis Municipais | DoD: Run the generalization test of the Informações Institucionais - Leis Municipais tag validator for the Municipality of Espinosa. | 1.0 | Generalization test for the tag Informações Institucionais - Leis Municipais - Espinosa - DoD: Run the generalization test of the Informações Institucionais - Leis Municipais tag validator for the Municipality of Espinosa. | non_main | generalization test for the tag informações institucionais leis municipais espinosa dod run the generalization test of the informações institucionais leis municipais tag validator for the municipality of espinosa | 0
376,799 | 26,217,822,219 | IssuesEvent | 2023-01-04 12:31:17 | demarches-simplifiees/demarches-simplifiees.fr | https://api.github.com/repos/demarches-simplifiees/demarches-simplifiees.fr | closed | Deletion email not at all reassuring | documentation good first issue | Is it possible to disable the email sent to users telling them that their file will soon be deleted?
Three out of four of them understand it as their housing application file being deleted, rather than their data, which is hosted for 1 month on demarches-simplifiees.fr
This causes serious misunderstandings, creates discontent and wastes precious time for us.
 | 1.0 | Deletion email not at all reassuring - Is it possible to disable the email sent to users telling them that their file will soon be deleted?
Three out of four of them understand it as their housing application file being deleted, rather than their data, which is hosted for 1 month on demarches-simplifiees.fr
This causes serious misunderstandings, creates discontent and wastes precious time for us.
 | non_main | deletion email not at all reassuring is it possible to disable the email sent to users telling them that their file will soon be deleted three out of four of them understand it as their housing application file being deleted rather than their data which is hosted for month on demarches simplifiees fr this causes serious misunderstandings creates discontent and wastes precious time for us | 0
35,584 | 31,848,478,257 | IssuesEvent | 2023-09-14 22:13:26 | openaq/openaq-api-v2 | https://api.github.com/repos/openaq/openaq-api-v2 | opened | migrate AWS Lambda to ARM64 | infrastructure | Migrating to the AWS Graviton2 ARM64 runtime provides a pretty significant cost-saving opportunity in addition to what appears to be a slight performance boost. The cost difference is:
| Architecture | Cost per GB-second |
| -------- | ------- |
| x86 | $0.0000166667 |
| ARM64 | $0.0000133334 |
This will require local deployment to always use Docker to create the dependencies layer, but this work is largely already done to accommodate deploying from Apple M1 -> the current x86.
| 1.0 | migrate AWS Lambda to ARM64 - Migrating to the AWS Graviton2 ARM64 runtime provides a pretty significant cost-saving opportunity in addition to what appears to be a slight performance boost. The cost difference is:
| Architecture | Cost per GB-second |
| -------- | ------- |
| x86 | $0.0000166667 |
| ARM64 | $0.0000133334 |
This will require local deployment to always use Docker to create the dependencies layer, but this work is largely already done to accommodate deploying from Apple M1 -> the current x86.
| non_main | migrate aws lambda to migrating to the aws runtime provides a pretty significant costs saving opportunity in addition to what appears to be a slight performance boost the cost difference is architecture cost gb sec this will require local deployment to always use docker to create the dependencies layer but this work is largely already done to accomodate deploying from apple the current | 0 |
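As a sanity check on the table in the record above, the two GB-second rates imply roughly a 20% saving (the percentage is computed here, not stated in the issue):

```javascript
// AWS Lambda per-GB-second prices quoted in the issue above.
const X86_RATE = 0.0000166667;
const ARM_RATE = 0.0000133334;

// Relative cost of ARM64 vs x86 for the same GB-seconds of compute.
const ratio = ARM_RATE / X86_RATE;
const savingPct = (1 - ratio) * 100;

console.log(ratio.toFixed(4));     // 0.8000 -- ARM64 costs about 80% of x86
console.log(savingPct.toFixed(1)); // 20.0
```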
264,856 | 20,035,741,093 | IssuesEvent | 2022-02-02 11:41:07 | Clinical-Genomics/meatballs | https://api.github.com/repos/Clinical-Genomics/meatballs | closed | Too much porridge! | Documentation Effort S Gain L | There are two pages for porridge under the Breakfast folder. I think I caused it... sorry | 1.0 | Too much porridge! - There are two pages for porridge under the Breakfast folder. I think I caused it... sorry | non_main | too much porridge there are two pages for porridge under the breakfast folder i think i caused it sorry | 0 |
3,111 | 11,873,698,359 | IssuesEvent | 2020-03-26 17:43:05 | precice/precice | https://api.github.com/repos/precice/precice | opened | Remove multi-level stubs from API | maintainability | After removing Manifold Mapping in #703 we should not forget to remove the API stubs `hasToEvaluateSurrogateModel` and `hasToEvaluateFineModel` in v3.0. | True | Remove multi-level stubs from API - After removing Manifold Mapping in #703 we should not forget to remove the API stubs `hasToEvaluateSurrogateModel` and `hasToEvaluateFineModel` in v3.0. | main | remove multi level stubs from api after removing manifold mapping in we should not forget to remove the api stubs hastoevaluatesurrogatemodel and hastoevaluatefinemodel in | 1 |
264,083 | 8,305,035,942 | IssuesEvent | 2018-09-22 00:50:32 | facebook/prepack | https://api.github.com/repos/facebook/prepack | opened | Simplifier bug: null is not undefined | Instant Render bug priority: high | If you tweak #2563 by replacing `=== null` with `=== undefined`, then things will be alright. However, when you do the same in an optimized function, things go wrong. Prepacking this...
```js
function f(obj) {
let y = obj.foo;
if (y == null) {
obj.bar = y === undefined;
}
}
__optimize(f);
```
produces the wrong output
```js
var f;
(function () {
var _$1 = this;
var _1 = function (obj) {
var _$0 = obj.foo;
if (_$0 == null) {
obj.bar = true;
}
return void 0;
};
_$1.f = _1;
}).call(this);
```
Again, using --dumpIRFilePath is very useful to see path conditions spelled out. | 1.0 | Simplifier bug: null is not undefined - If you tweak #2563 by replacing `=== null` with `=== undefined`, then things will be alright. However, when you do the same in an optimized function, things go wrong. Prepacking this...
```js
function f(obj) {
let y = obj.foo;
if (y == null) {
obj.bar = y === undefined;
}
}
__optimize(f);
```
produces the wrong output
```js
var f;
(function () {
var _$1 = this;
var _1 = function (obj) {
var _$0 = obj.foo;
if (_$0 == null) {
obj.bar = true;
}
return void 0;
};
_$1.f = _1;
}).call(this);
```
Again, using --dumpIRFilePath is very useful to see path conditions spelled out. | non_main | simplifier bug null is not undefined if you tweak by replacing null with undefined then things will be alright however when you do the same in an optimized function things go wrong prepacking this js function f obj let y obj foo if y null obj bar y undefined optimize f produces the wrong output js var f function var this var function obj var obj foo if null obj bar true return void f call this again using dumpirfilepath is very useful to see path conditions spelled out | 0 |
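The JavaScript semantics behind this record: `y == null` is true when `y` is either `null` or `undefined`, so inside that branch `y === undefined` cannot be simplified to `true`. A minimal demonstration using the same function:

```javascript
// Loose equality: null and undefined are mutually == but only === themselves.
function f(obj) {
  let y = obj.foo;
  if (y == null) {
    // Here y is either null or undefined -- not necessarily undefined.
    obj.bar = y === undefined;
  }
  return obj.bar;
}

console.log(f({ foo: undefined })); // true
console.log(f({ foo: null }));      // false -- the value the prepacked output gets wrong
```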
115,249 | 4,662,180,185 | IssuesEvent | 2016-10-05 02:02:38 | TeamPorcupine/ProjectPorcupine | https://api.github.com/repos/TeamPorcupine/ProjectPorcupine | closed | Localization not being updated when saving settings | bug high priority localization | Go to the settings menu and change the language to anything. Then apply or save. There are no errors, but the new language is not applied.
Noticed by @TomMalbran | 1.0 | Localization not being updated when saving settings - Go to the settings menu and change the language to anything. Then apply or save. There are no errors, but the new language is not applied.
Noticed by @TomMalbran | non_main | localization not being updated when saving settings go to the settings menu and change the language to anything then apply or save there are no errors but the new language is not applied noticed by tommalbran | 0 |
722,417 | 24,861,342,623 | IssuesEvent | 2022-10-27 08:31:33 | input-output-hk/cardano-node | https://api.github.com/repos/input-output-hk/cardano-node | closed | [BUG] - Some genesis file names are different to the protocol parameter names | bug priority medium alonzo-white API&CLI-Backlog | **Internal**
**Area**
*Plutus* Related to Plutus Scripts (Alonzo).
**Summary**
The genesis file names are different to the protocol parameter names. This could lead to configuration issues.
**Steps to reproduce**
Sample genesis file
```
{
"lovelacePerUTxOWord": 34482,
"executionPrices": {
"prSteps":
{ "numerator" : 1,
"denominator" : 100
},
"prMem":
{ "numerator" : 5,
"denominator" : 1000
}
},
"maxTxExUnits": {
"exUnitsMem": 11000000000,
"exUnitsSteps": 11000000000
},
"maxBlockExUnits": {
"exUnitsMem": 110000000000,
"exUnitsSteps": 110000000000
},
"maxValueSize": 5000,
"collateralPercentage": 150,
"maxCollateralInputs": 6
}
```
Sample protocol-parameter output
```
cardano-cli query protocol-parameters
{
...
"utxoCostPerWord": 0,
...
"maxTxExecutionUnits": {
"memory": 1,
"steps": 1
},
...
"executionUnitPrices": {
"priceSteps": 1,
"priceMemory": 1
}
}
```
**Expected behavior**
Identical parameter names.
**System info (please complete the following information):**
All operating systems.
```
git branch -v
* master b70b348c5 Merge #2979
```
| 1.0 | [BUG] - Some genesis file names are different to the protocol parameter names - **Internal**
**Area**
*Plutus* Related to Plutus Scripts (Alonzo).
**Summary**
The genesis file names are different to the protocol parameter names. This could lead to configuration issues.
**Steps to reproduce**
Sample genesis file
```
{
"lovelacePerUTxOWord": 34482,
"executionPrices": {
"prSteps":
{ "numerator" : 1,
"denominator" : 100
},
"prMem":
{ "numerator" : 5,
"denominator" : 1000
}
},
"maxTxExUnits": {
"exUnitsMem": 11000000000,
"exUnitsSteps": 11000000000
},
"maxBlockExUnits": {
"exUnitsMem": 110000000000,
"exUnitsSteps": 110000000000
},
"maxValueSize": 5000,
"collateralPercentage": 150,
"maxCollateralInputs": 6
}
```
Sample protocol-parameter output
```
cardano-cli query protocol-parameters
{
...
"utxoCostPerWord": 0,
...
"maxTxExecutionUnits": {
"memory": 1,
"steps": 1
},
...
"executionUnitPrices": {
"priceSteps": 1,
"priceMemory": 1
}
}
```
**Expected behavior**
Identical parameter names.
**System info (please complete the following information):**
All operating systems.
```
git branch -v
* master b70b348c5 Merge #2979
```
| non_main | some genesis file names are different to the protocol parameter names internal area plutus related to plutus scripts alonzo summary the genesis file names are different to the protocol parameter names this could lead to configuration issues steps to reproduce sample genesis file lovelaceperutxoword executionprices prsteps numerator denominator prmem numerator denominator maxtxexunits exunitsmem exunitssteps maxblockexunits exunitsmem exunitssteps maxvaluesize collateralpercentage maxcollateralinputs sample protocol parameter output cardano cli query protocol parameters utxocostperword maxtxexecutionunits memory steps executionunitprices pricesteps pricememory expected behavior identical parameter names system info please complete the following information all operating systems git branch v master merge | 0 |
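One way to cope with the naming drift reported above is a small key-translation table; the mapping below covers only the mismatched pairs quoted in the record and is illustrative, not an official correspondence:

```javascript
// Genesis-file key -> `cardano-cli query protocol-parameters` key,
// for the mismatches quoted in the issue above (illustrative only).
const GENESIS_TO_PPARAMS = {
  lovelacePerUTxOWord: 'utxoCostPerWord',
  maxTxExUnits: 'maxTxExecutionUnits',
  executionPrices: 'executionUnitPrices',
};

function translateKeys(genesis) {
  const out = {};
  for (const [k, v] of Object.entries(genesis)) {
    out[GENESIS_TO_PPARAMS[k] ?? k] = v; // pass unmapped keys through unchanged
  }
  return out;
}

console.log(translateKeys({ lovelacePerUTxOWord: 34482, maxValueSize: 5000 }));
```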
533,546 | 15,593,010,657 | IssuesEvent | 2021-03-18 12:25:59 | mlr-org/mlr3spatiotempcv | https://api.github.com/repos/mlr-org/mlr3spatiotempcv | closed | Checkerboard pattern with spcv_block? | Priority: High Status: In Progress Type: Bug | Dear mlr3spatiotempcv team,
First, many thanks for your hard work on this excellent resource.
I am having issues producing a checkerboard sampling pattern using `spcv_block`. Instead of getting a checkerboard spatial partitioning, I always get something that looks more like a random sampling pattern. I have been successful in creating a checkerboard pattern using the `blockCV` functions directly.
Here is a reproducible example that fails to produce a checkerboard sampling pattern:
```
library(blockCV)
library(mlr3)
library(mlr3spatiotempcv)
x <- runif(5000, -80.5, -75)
y <- runif(5000, 39.7, 42)
data <- data.frame(spp="test",
label=factor(round(runif(length(x), 0, 1))),
x=x,
y=y)
testTask <- TaskClassifST$new(id = "test",
backend = data,
target = "label",
positive="1",
extra_args = list(coordinate_names=c("x", "y"),
crs="EPSG: 4326"))
blockSamp <- rsmp("spcv_block",
folds=2,
range=50000,
selection="checkerboard")
blockSamp$instantiate(testTask)
autoplot(blockSamp, testTask)
```

| 1.0 | Checkerboard pattern with spcv_block? - Dear mlr3spatiotempcv team,
First, many thanks for your hard work on this excellent resource.
I am having issues producing a checkerboard sampling pattern using `spcv_block`. Instead of getting a checkerboard spatial partitioning, I always get something that looks more like a random sampling pattern. I have been successful in creating a checkerboard pattern using the `blockCV` functions directly.
Here is a reproducible example that fails to produce a checkerboard sampling pattern:
```
library(blockCV)
library(mlr3)
library(mlr3spatiotempcv)
x <- runif(5000, -80.5, -75)
y <- runif(5000, 39.7, 42)
data <- data.frame(spp="test",
label=factor(round(runif(length(x), 0, 1))),
x=x,
y=y)
testTask <- TaskClassifST$new(id = "test",
backend = data,
target = "label",
positive="1",
extra_args = list(coordinate_names=c("x", "y"),
crs="EPSG: 4326"))
blockSamp <- rsmp("spcv_block",
folds=2,
range=50000,
selection="checkerboard")
blockSamp$instantiate(testTask)
autoplot(blockSamp, testTask)
```

| non_main | checkerboard pattern with spcv block dear team first many thanks for your hard work on this excellent resource i am having an issues producing a checkerboard sampling pattern using spcv block instead of getting a checkerboard spatial partitioning i always get something that looks more like a random sampling pattern i have been successful creating a checkerboard pattern using the blockcv functions directly here is a reproducible example that fails to produce a checkerboard sampling pattern library blockcv library library x runif y runif data data frame spp test label factor round runif length x x x y y testtask taskclassifst new id test backend data target label positive extra args list coordinate names c x y crs epsg blocksamp rsmp spcv block folds range selection checkerboard blocksamp instantiate testtask autoplot blocksamp testtask | 0 |
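For reference, the `checkerboard` selection the reporter expected amounts to assigning each point a fold from the parity of its grid cell. A language-neutral sketch (JavaScript here for consistency with the other examples; `cell` plays the role of blockCV's `range`):

```javascript
// Checkerboard fold assignment: points in alternating grid cells get fold 1 or 2.
// `cell` is the block size in the same units as x and y.
function checkerboardFold(x, y, cell) {
  const col = Math.floor(x / cell);
  const row = Math.floor(y / cell);
  return (((col + row) % 2) + 2) % 2 === 0 ? 1 : 2; // normalise negative remainders
}

console.log(checkerboardFold(10, 10, 50)); // 1 (cell 0,0)
console.log(checkerboardFold(60, 10, 50)); // 2 (cell 1,0 -- an adjacent cell flips the fold)
```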
5,011 | 25,758,880,641 | IssuesEvent | 2022-12-08 18:39:06 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Add background colors to table icons | affects: ux work: frontend status: draft restricted: maintainers | 
<img width="399" alt="Screenshot 2022-12-06 at 7 53 52 PM" src="https://user-images.githubusercontent.com/11032856/205937510-d7298211-ccf0-4219-8268-38710274d09b.png">
| True | Add background colors to table icons - 
<img width="399" alt="Screenshot 2022-12-06 at 7 53 52 PM" src="https://user-images.githubusercontent.com/11032856/205937510-d7298211-ccf0-4219-8268-38710274d09b.png">
| main | add background colors to table icons img width alt screenshot at pm src | 1 |
305,967 | 26,423,847,879 | IssuesEvent | 2023-01-14 00:22:41 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | closed | Fix trigonometric_functions.test_numpy_deg2rad | NumPy Frontend Sub Task Failing Test | | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3855435433/jobs/6570511859" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/3865845421/jobs/6589592684" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3864404459/jobs/6587161901" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/3865075588/jobs/6588337825" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
| 1.0 | Fix trigonometric_functions.test_numpy_deg2rad - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/3855435433/jobs/6570511859" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/3865845421/jobs/6589592684" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/3864404459/jobs/6587161901" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/3865075588/jobs/6588337825" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>Not found</summary>
Not found
</details>
| non_main | fix trigonometric functions test numpy tensorflow img src torch img src numpy img src jax img src not found not found | 0 |
250,463 | 7,977,427,988 | IssuesEvent | 2018-07-17 15:17:40 | kubevirt/kubevirt | https://api.github.com/repos/kubevirt/kubevirt | reopened | Vms fail to launch with Insufficient devices.kubevirt.io/tun | area/handler kind/bug priority/critical-urgent | <!-- This form is for bug reports and feature requests ONLY!
Also make sure that you visit our User Guide at https://kubevirt.io/user-guide/
-->
**Is this a BUG REPORT or FEATURE REQUEST?**:
>
/kind bug
**What happened**:
Try to launch a vm on 0.7.0. The related pod stays in pending mode, with the following message:
Warning FailedScheduling 5s (x6 over 20s) default-scheduler 0/1 nodes are available: 1 Insufficient devices.kubevirt.io/tun.
**What you expected to happen**:
vm launches
**How to reproduce it (as minimally and precisely as possible)**:
deploy 0.7.0 with a configmap to useEmulation
create a vm
cry
**Anything else we need to know?**:
**Environment**:
- KubeVirt version (use `virtctl version`):
Client Version: version.Info{GitVersion:"v0.7.0", GitCommit:"b5b91243f540739eb5db61af89b2f1e5ba449dfa", GitTreeState:"clean", BuildDate:"2018-07-04T14:16:40Z", GoVersion:"go1.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: &version.Info{GitVersion:"v0.7.0", GitCommit:"b5b91243f540739eb5db61af89b2f1e5ba449dfa", GitTreeState:"clean", BuildDate:"2018-07-04T14:16:40Z", GoVersion:"go1.10", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes version (use `kubectl version`):
oc v3.10.0-rc.0+c20e215
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://kubevirt:8443
openshift v3.10.0-rc.0+e132a20-38
kubernetes v1.10.0+b81c8f8
- VM or VMI specifications:
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others: | 1.0 | Vms fail to launch with Insufficient devices.kubevirt.io/tun - <!-- This form is for bug reports and feature requests ONLY!
Also make sure that you visit our User Guide at https://kubevirt.io/user-guide/
-->
**Is this a BUG REPORT or FEATURE REQUEST?**:
>
/kind bug
**What happened**:
Try to launch a vm on 0.7.0. The related pod stays in pending mode, with the following message:
Warning FailedScheduling 5s (x6 over 20s) default-scheduler 0/1 nodes are available: 1 Insufficient devices.kubevirt.io/tun.
**What you expected to happen**:
vm launches
**How to reproduce it (as minimally and precisely as possible)**:
deploy 0.7.0 with a configmap to useEmulation
create a vm
cry
**Anything else we need to know?**:
**Environment**:
- KubeVirt version (use `virtctl version`):
Client Version: version.Info{GitVersion:"v0.7.0", GitCommit:"b5b91243f540739eb5db61af89b2f1e5ba449dfa", GitTreeState:"clean", BuildDate:"2018-07-04T14:16:40Z", GoVersion:"go1.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: &version.Info{GitVersion:"v0.7.0", GitCommit:"b5b91243f540739eb5db61af89b2f1e5ba449dfa", GitTreeState:"clean", BuildDate:"2018-07-04T14:16:40Z", GoVersion:"go1.10", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes version (use `kubectl version`):
oc v3.10.0-rc.0+c20e215
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://kubevirt:8443
openshift v3.10.0-rc.0+e132a20-38
kubernetes v1.10.0+b81c8f8
- VM or VMI specifications:
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others: | non_main | vms fail to launch with insufficient devices kubevirt io tun this form is for bug reports and feature requests only also make sure that you visit our user guide at is this a bug report or feature request kind bug what happened try to launch a vm on the related pod stays in pending mode with the following message warning failedscheduling over default scheduler nodes are available insufficient devices kubevirt io tun what you expected to happen vm launches how to reproduce it as minimally and precisely as possible deploy with a configmap to useemulation create a vm cry anything else we need to know environment kubevirt version use virtctl version client version version info gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux server version version info gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux kubernetes version use kubectl version oc rc kubernetes features basic auth gssapi kerberos spnego server openshift rc kubernetes vm or vmi specifications cloud provider or hardware configuration os e g from etc os release kernel e g uname a install tools others | 0 |
2,351 | 8,405,772,224 | IssuesEvent | 2018-10-11 16:03:09 | ansible/ansible | https://api.github.com/repos/ansible/ansible | closed | Add v2 api support for pagerduty module | affects_2.6 feature module needs_maintainer support:community waiting_on_contributor | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
pagerduty
##### ANSIBLE VERSION
any
##### CONFIGURATION
any
##### OS / ENVIRONMENT
any
##### SUMMARY
Pagerduty has announced end of support for their V1 API on October 19th of this year. Would like to request support for v2 well ahead of that date to not have a panic at the last minute. The v1 API is currently hard-coded in the module. See here: https://github.com/ansible/ansible-modules-extras/blob/6c7d63b15c77126b4d6a8a7668545555578469c5/monitoring/pagerduty.py#L185
##### STEPS TO REPRODUCE
Use pagerduty module with a v2 token
##### EXPECTED RESULTS
Support for the v2 api
##### ACTUAL RESULTS
400 bad request
| True | Add v2 api support for pagerduty module - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
pagerduty
##### ANSIBLE VERSION
any
##### CONFIGURATION
any
##### OS / ENVIRONMENT
any
##### SUMMARY
Pagerduty has announced end of support for their V1 API on October 19th of this year. Would like to request support for v2 well ahead of that date to not have a panic at the last minute. The v1 API is currently hard-coded in the module. See here: https://github.com/ansible/ansible-modules-extras/blob/6c7d63b15c77126b4d6a8a7668545555578469c5/monitoring/pagerduty.py#L185
##### STEPS TO REPRODUCE
Use pagerduty module with a v2 token
##### EXPECTED RESULTS
Support for the v2 api
##### ACTUAL RESULTS
400 bad request
| main | add api support for pagerduty module issue type feature idea component name pagerduty ansible version any configuration any os environment any summary pagerduty has announced end of support for their api on october of this year would like to request support for well ahead of that date to not have a panic at the last minute the api is currently hard coded in the module see here steps to reproduce use pagerduty module with a token expected results support for the api actual results bad request | 1 |
2,041 | 6,889,182,926 | IssuesEvent | 2017-11-22 09:30:46 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Chasm paths are all kinds of messed up | Maintainability/Hinders improvements | Originally chasms deleted things, and chasm/straight_down dropped things down a z-level
Now for some reason all the animation code is defined under
/turf/open/chasm/straight_down/lava_land_surface
A lavaland only variant, with the wrong path at that because lavaland chasms delete rather than drop
Which leads to an even sillier path
/turf/open/chasm/straight_down/lava_land_surface/normal_air
@Xhuis I guess since you added this code (I think?), I'm very confused why you put it in the place you did | True | Chasm paths are all kinds of messed up - Originally chasms deleted things, and chasm/straight_down dropped things down a z-level
Now for some reason all the animation code is defined under
/turf/open/chasm/straight_down/lava_land_surface
A lavaland only variant, with the wrong path at that because lavaland chasms delete rather than drop
Which leads to an even sillier path
/turf/open/chasm/straight_down/lava_land_surface/normal_air
@Xhuis I guess since you added this code (I think?), I'm very confused why you put it in the place you did | main | chasm paths are all kinds of messed up originally chasms deleted things and chasm straight down dropped things a z level now for some reason all the animation code is defined under turf open chasm straight down lava land surface a lavaland only variant with the wrong path at that because lavaland chasms delete rather than drop which leads to an even sillier path turf open chasm straight down lava land surface normal air xhuis i guess since you added this code i think i m very confused why you put it in the place you did | 1 |
4,141 | 19,686,136,421 | IssuesEvent | 2022-01-11 22:25:52 | aws/serverless-application-model | https://api.github.com/repos/aws/serverless-application-model | closed | Unable to define Tags on Simple table | blocked/close-if-inactive maintainer/need-response | ### Description:
<!-- Briefly describe the bug you are facing.-->
Unable to define Tags on AWS::Serverless::SimpleTable table in json, getting an error "Invalid Serverless Application Specification document"
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
1. Define Cloud Formation Template (attached) - first without tags on dynamo table, then with tags.
2. Attempt to deploy.
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
Getting the error
### Expected result:
<!-- Describe what you expected.-->
Dynamo table to be created
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Windows
2. If using SAM CLI, `sam --version`:
3. AWS region: us-east-1
`Add --debug flag to any SAM CLI commands you are running`

[cloudformation-terraformresources.txt](https://github.com/aws/serverless-application-model/files/7830522/cloudformation-terraformresources.txt)
| True | Unable to define Tags on Simple table - ### Description:
<!-- Briefly describe the bug you are facing.-->
Unable to define Tags on AWS::Serverless::SimpleTable table in json, getting an error "Invalid Serverless Application Specification document"
### Steps to reproduce:
<!-- Provide detailed steps to replicate the bug, including steps from third party tools (CDK, etc.) -->
1. Define Cloud Formation Template (attached) - first without tags on dynamo table, then with tags.
2. Attempt to deploy.
### Observed result:
<!-- Please provide command output with `--debug` flag set.-->
Getting the error
### Expected result:
<!-- Describe what you expected.-->
Dynamo table to be created
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Windows
2. If using SAM CLI, `sam --version`:
3. AWS region: us-east-1
`Add --debug flag to any SAM CLI commands you are running`

[cloudformation-terraformresources.txt](https://github.com/aws/serverless-application-model/files/7830522/cloudformation-terraformresources.txt)
| main | unable to define tags on simple table description unable to define tags on aws serverless simpletable table in json getting an error invalid serverless application specification document steps to reproduce define cloud formation template attached first without tags on dynamo table then with tags attempt to deploy observed result getting the error expected result dynamo table to be created additional environment details ex windows mac amazon linux etc os windows if using sam cli sam version aws region us east add debug flag to any sam cli commands you are running | 1 |
117,766 | 9,958,731,089 | IssuesEvent | 2019-07-05 22:55:14 | ODIQueensland/data-curator | https://api.github.com/repos/ODIQueensland/data-curator | closed | Acceptance Test v1.1.0 | i:User-Acceptance-Test | Sponsor to perform acceptance tests
- [download](https://github.com/ODIQueensland/data-curator/releases), install, and test Data Curator
- [report issues](https://github.com/ODIQueensland/data-curator/issues/new?template=bug.md&labels=problem:Bug&assignee=Stephen-Gates)
cc: @louisjasek | 1.0 | Acceptance Test v1.1.0 - Sponsor to perform acceptance tests
- [download](https://github.com/ODIQueensland/data-curator/releases), install, and test Data Curator
- [report issues](https://github.com/ODIQueensland/data-curator/issues/new?template=bug.md&labels=problem:Bug&assignee=Stephen-Gates)
cc: @louisjasek | non_main | acceptance test sponsor to perform acceptance tests install and test data curator cc louisjasek | 0 |
2,756 | 9,872,894,334 | IssuesEvent | 2019-06-22 09:12:05 | arcticicestudio/snowsaw | https://api.github.com/repos/arcticicestudio/snowsaw | opened | remark-lint | context-workflow scope-maintainability type-feature | <p align="center"><img src="https://raw.githubusercontent.com/remarkjs/remark-lint/02295bc/logo.svg?sanitize=true" width="20%" /></p>
Integrate [remark-lint][] which is built on [remark][], the powerful Markdown processor powered by plugins such as remark-lint.
> Ensuring the markdown you (and contributors) write is of great quality will provide better rendering in all the different markdown parsers, and makes sure less refactoring is needed afterwards.
remark-lint can be used through [remark-cli][npm-remark-cli] through a preset. This preset will be [remark-preset-lint-arcticicestudio][gh-remark-preset-lint-arcticicestudio], the custom preset that implements the [Arctic ice Studio Markdown Style Guide][styleguide-markdown].
Since the custom preset is still in major version `0`, note that the version range should be `>=0.x.x <1.0.0` to avoid NPM's “SemVer Major Zero Caveat”. When defining package versions with the caret `^` or tilde `~` range selector, it won't affect packages with a major version of `0`. NPM will resolve these packages to their exact version until the major version is greater than or equal to `1`.
To avoid this caveat the more detailed version range `>=0.x.x <1.0.0` should be used to resolve all versions greater or equal to `0.x.x` but less than `1.0.0`. This will always use the latest `0.x.x` version and removes the need to increment the version manually on each new release.
<p align="center"><img src="https://raw.githubusercontent.com/remarkjs/remark-lint/02295bc/screenshot.png" /></p>
### Configuration
The `.remarkrc.js` configuration file will be placed in the project root, as well as the `.remarkignore` file to define ignore patterns.
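A minimal sketch of what that `.remarkrc.js` could look like (assumed shape; only the preset discussed above is configured):

```javascript
// .remarkrc.js -- illustrative sketch; exact plugin options may vary.
const config = {
  plugins: [
    // The custom preset implementing the Arctic Ice Studio Markdown style guide.
    'remark-preset-lint-arcticicestudio',
  ],
};

module.exports = config;
```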
### NPM script/task
To allow to run the Markdown linting separately a `lint:md` npm script/task will be added to be included in the main `lint` script flow.
## Tasks
- [ ] Install [remark-cli][npm-remark-cli] and [remark-preset-lint-arcticicestudio][npm-remark-preset-lint-arcticicestudio] packages to `devDependencies`.
- [ ] Implement `.remarkrc.js` configuration file.
- [ ] Implement `.remarkignore` ignore pattern file.
- [ ] Implement npm `lint:md` script/task.
- [ ] Lint current code base for the first time and fix possible Markdown style guide violations.
[remark]: https://remark.js.org
[remark-lint]: https://github.com/remarkjs/remark-lint
[npm-remark-cli]: https://www.npmjs.com/package/remark-cli
[npm-remark-preset-lint-arcticicestudio]: https://www.npmjs.com/package/remark-preset-lint-arcticicestudio
[gh-remark-preset-lint-arcticicestudio]: https://github.com/arcticicestudio/remark-preset-lint-arcticicestudio
[styleguide-markdown]: https://arcticicestudio.github.io/styleguide-markdown | True | remark-lint - <p align="center"><img src="https://raw.githubusercontent.com/remarkjs/remark-lint/02295bc/logo.svg?sanitize=true" width="20%" /></p>
Integrate [remark-lint][] which is built on [remark][], the powerful Markdown processor powered by plugins such as remark-lint.
> Ensuring the markdown you (and contributors) write is of great quality will provide better rendering in all the different markdown parsers, and makes sure less refactoring is needed afterwards.
remark-lint can be used through [remark-cli][npm-remark-cli] through a preset. This preset will be [remark-preset-lint-arcticicestudio][gh-remark-preset-lint-arcticicestudio], the custom preset that implements the [Arctic ice Studio Markdown Style Guide][styleguide-markdown].
Since the custom preset is still in major version `0` note that the version range should be `>=0.x.x <1.0.0` to avoid NPM's “SemVer Major Zero Caveat”. When defining package versions with the caret `^` or tilde `~` range selector it won't affect packages with a major version of `0`. NPM will resolve these packages to their exact version until the major version is greater or equal to `1`.
To avoid this caveat the more detailed version range `>=0.x.x <1.0.0` should be used to resolve all versions greater or equal to `0.x.x` but less than `1.0.0`. This will always use the latest `0.x.x` version and removes the need to increment the version manually on each new release.
<p align="center"><img src="https://raw.githubusercontent.com/remarkjs/remark-lint/02295bc/screenshot.png" /></p>
### Configuration
The `.remarkrc.js` configuration file will be placed in the project root as well as the `.remarkignore` file to also define ignore pattern.
### NPM script/task
To allow to run the Markdown linting separately a `lint:md` npm script/task will be added to be included in the main `lint` script flow.
## Tasks
- [ ] Install [remark-cli][npm-remark-cli] and [remark-preset-lint-arcticicestudio][npm-remark-preset-lint-arcticicestudio] packages to `devDependencies`.
- [ ] Implement `.remarkrc.js` configuration file.
- [ ] Implement `.remarkignore` ignore pattern file.
- [ ] Implement npm `lint:md` script/task.
- [ ] Lint current code base for the first time and fix possible Markdown style guide violations.
[remark]: https://remark.js.org
[remark-lint]: https://github.com/remarkjs/remark-lint
[npm-remark-cli]: https://www.npmjs.com/package/remark-cli
[npm-remark-preset-lint-arcticicestudio]: https://www.npmjs.com/package/remark-preset-lint-arcticicestudio
[gh-remark-preset-lint-arcticicestudio]: https://github.com/arcticicestudio/remark-preset-lint-arcticicestudio
[styleguide-markdown]: https://arcticicestudio.github.io/styleguide-markdown | main | remark lint integrate which is built on the powerful markdown processor powered by plugins such as remark lint ensuring the markdown you and contributors write is of great quality will provide better rendering in all the different markdown parsers and makes sure less refactoring is needed afterwards remark lint can be used through through a preset this preset will be the custom preset that implements the since the custom preset is still in major version note that the version range should be x x to avoid npm s “semver major zero caveat” when defining package versions with the the carat or tilde range selector it won t affect packages with a major version of npm will resolve these packages to their exact version until the major version is greater or equal to to avoid this caveat the more detailed version range x x should be used to resolve all versions greater or equal to x x but less than this will always use the latest x x version and removes the need to increment the version manually on each new release configuration the remarkrc js configuration file will be placed in the project root as well as the remarkignore file to also define ignore pattern npm script task to allow to run the markdown linting separately a lint md npm script task will be added to be included in the main lint script flow tasks install and packages to devdependencies implement remarkrc js configuration file implement remarkignore ignore pattern file implement npm lint md script task lint current code base for the first time and fix possible markdown style guide violations | 1 |
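The version-range discussion in the record above can be made concrete with a small check. The sketch below is an illustrative Python function of my own (not part of the snowsaw or remark tooling) showing why every `0.x.y` release satisfies `>=0.x.x <1.0.0` while `1.0.0` does not:

```python
def satisfies_pre_1_0(version: str) -> bool:
    """Return True if `version` falls in the npm range `>=0.x.x <1.0.0`.

    npm's caret (^) and tilde (~) selectors effectively pin packages whose
    major version is 0, so an explicit range is needed to pick up new 0.x
    releases without manual version bumps.
    """
    # For this particular range, only the major component matters:
    # any 0.x.y release matches, 1.0.0 and later do not.
    major = int(version.split(".", 1)[0])
    return major == 0


# Every 0.x release matches, so no manual bump is needed until 1.0.0.
assert satisfies_pre_1_0("0.1.0")
assert satisfies_pre_1_0("0.9.27")
assert not satisfies_pre_1_0("1.0.0")
```

In a real `package.json` the range is simply written as the string `">=0.x.x <1.0.0"`; the function above only mirrors its semantics for illustration.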
1,173 | 5,095,182,264 | IssuesEvent | 2017-01-03 14:25:36 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | ec2_elb_lb intermittently failing to create an ELB with KeyError exception | affects_2.2 aws bug_report cloud waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ec2_elb_lb.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0 (devel fa5f8a7543) last updated 2016/09/18 02:10:05 (GMT -700)
lib/ansible/modules/core: (detached HEAD 488f082761) last updated 2016/09/18 02:10:48 (GMT -700)
lib/ansible/modules/extras: (detached HEAD 24da3602c6) last updated 2016/09/18 02:10:57 (GMT -700)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
When running a playbook to create multiple ELBs, some of the ELBs would intermittently fail to get created with a KeyError exception.
I tracked this issue down to some data structure inconsistency in the _set_elb_listeners function.
It would appear that the data type stored in self.elb.listeners is sometimes a dict and other times a tuple.
I hacked around this issue but am not sure if my fix has broken anything else. See the diff below:
```
user@utf:/tmp$ diff ec2_elb_lb.py mine.py
755,758c755,760
< if existing_listener[0] == int(listener['load_balancer_port']):
< existing_listener_found = self._api_listener_as_tuple(existing_listener)
< break
<
---
> try:
> if self._listener_as_tuple(existing_listener)[0] == int(listener['load_balancer_port']):
> existing_listener_found = self._listener_as_tuple(existing_listener)
> break
> except KeyError:
> self.module.fail_json(msg="Ran into keyerror bug. self.elb.listeners is '%s'" % self.elb.listeners)
776c778
< existing_listener_tuple = self._api_listener_as_tuple(existing_listener)
---
> existing_listener_tuple = self._listener_as_tuple(existing_listener)
```
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Create Control ELB
ec2_elb_lb:
name: "Control-ELB"
state: present
security_group_names:
- "{{ sg_name }}"
region: "{{ aws_region }}"
purge_instance_ids: true
idle_timeout: 60
subnets:
- "{{ subnet_id }}"
purge_listeners: true
listeners:
- protocol: tcp
load_balancer_port: 443
instance_port: 443
health_check:
ping_protocol: http
ping_port: 80
ping_path: "/health.html"
response_timeout: 5
interval: 10
unhealthy_threshold: 2
healthy_threshold: 2
- name: Create Service ELB
ec2_elb_lb:
name: "Service-ELB"
state: present
security_group_names:
- "{{ sg_name }}"
region: "{{ aws_region }}"
purge_instance_ids: true
idle_timeout: 3600
subnets:
- "{{ subnet_id }}"
purge_listeners: true
listeners:
- protocol: tcp
load_balancer_port: 443
instance_port: 443
health_check:
ping_protocol: https
ping_port: 443
ping_path: "/health.html"
response_timeout: 5
interval: 10
unhealthy_threshold: 2
healthy_threshold: 2
- name: Create Internal ELB
ec2_elb_lb:
name: "Int-ELB"
state: present
subnets:
- "{{ subnet_id }}"
security_group_names:
- "{{ sg_name }}"
region: "{{ aws_region }}"
purge_instance_ids: true
scheme: "internal"
idle_timeout: 3600
purge_listeners: true
listeners:
- protocol: tcp
load_balancer_port: 80
instance_port: 80
health_check:
ping_protocol: tcp
ping_port: 80
response_timeout: 5
interval: 10
unhealthy_threshold: 2
healthy_threshold: 2
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect all ELBs to be created properly with the given settings.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
The ELBs get created but Ansible quits with a Python exception, which prevents further playbooks in a workflow from running.
<!--- Paste verbatim command output between quotes below -->
```
TASK [Create Internal ELB] ******************************************
task path: /home/nsroot/ansible-work/playbooks/create-elbs.yml:96
Using module file /home/nsroot/ansible/lib/ansible/modules/core/cloud/amazon/ec2_elb_lb.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: nsroot
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572 `" && echo ansible-tmp-1474500949.27-185632929248572="` echo $HOME/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572 `" ) && sleep 0'
<localhost> PUT /tmp/tmp9oxobX TO /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py; rm -rf "/home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py", line 1354, in <module>
main()
File "/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py", line 1338, in main
elb_man.ensure_ok()
File "/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py", line 410, in _do_op
return op(*args, **kwargs)
File "/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py", line 484, in ensure_ok
self._set_elb_listeners()
File "/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py", line 755, in _set_elb_listeners
if existing_listener[0] == int(listener['load_balancer_port']):
KeyError: 0
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ec2_elb_lb"
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\", line 1354, in <module>\n main()\n File \"/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\", line 1338, in main\n elb_man.ensure_ok()\n File \"/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\", line 410, in _do_op\n return op(*args, **kwargs)\n File \"/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\", line 484, in ensure_ok\n self._set_elb_listeners()\n File \"/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\", line 755, in _set_elb_listeners\n if existing_listener[0] == int(listener['load_balancer_port']):\nKeyError: 0\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
to retry, use: --limit @/home/nsroot/ansible-work/playbooks/create-elbs.retry
```
| True | ec2_elb_lb intermittently failing to create an ELB with KeyError exception - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
ec2_elb_lb.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0 (devel fa5f8a7543) last updated 2016/09/18 02:10:05 (GMT -700)
lib/ansible/modules/core: (detached HEAD 488f082761) last updated 2016/09/18 02:10:48 (GMT -700)
lib/ansible/modules/extras: (detached HEAD 24da3602c6) last updated 2016/09/18 02:10:57 (GMT -700)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
Default
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
When running a playbook to create multiple ELBs, some of the ELBs would intermittently fail to get created with a KeyError exception.
I tracked this issue down to some data structure inconsistency in the _set_elb_listeners function.
It would appear that the data type stored in self.elb.listeners is sometimes a dict and other times a tuple.
I hacked around this issue but am not sure if my fix has broken anything else. See the diff below:
```
user@utf:/tmp$ diff ec2_elb_lb.py mine.py
755,758c755,760
< if existing_listener[0] == int(listener['load_balancer_port']):
< existing_listener_found = self._api_listener_as_tuple(existing_listener)
< break
<
---
> try:
> if self._listener_as_tuple(existing_listener)[0] == int(listener['load_balancer_port']):
> existing_listener_found = self._listener_as_tuple(existing_listener)
> break
> except KeyError:
> self.module.fail_json(msg="Ran into keyerror bug. self.elb.listeners is '%s'" % self.elb.listeners)
776c778
< existing_listener_tuple = self._api_listener_as_tuple(existing_listener)
---
> existing_listener_tuple = self._listener_as_tuple(existing_listener)
```
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```
- name: Create Control ELB
ec2_elb_lb:
name: "Control-ELB"
state: present
security_group_names:
- "{{ sg_name }}"
region: "{{ aws_region }}"
purge_instance_ids: true
idle_timeout: 60
subnets:
- "{{ subnet_id }}"
purge_listeners: true
listeners:
- protocol: tcp
load_balancer_port: 443
instance_port: 443
health_check:
ping_protocol: http
ping_port: 80
ping_path: "/health.html"
response_timeout: 5
interval: 10
unhealthy_threshold: 2
healthy_threshold: 2
- name: Create Service ELB
ec2_elb_lb:
name: "Service-ELB"
state: present
security_group_names:
- "{{ sg_name }}"
region: "{{ aws_region }}"
purge_instance_ids: true
idle_timeout: 3600
subnets:
- "{{ subnet_id }}"
purge_listeners: true
listeners:
- protocol: tcp
load_balancer_port: 443
instance_port: 443
health_check:
ping_protocol: https
ping_port: 443
ping_path: "/health.html"
response_timeout: 5
interval: 10
unhealthy_threshold: 2
healthy_threshold: 2
- name: Create Internal ELB
ec2_elb_lb:
name: "Int-ELB"
state: present
subnets:
- "{{ subnet_id }}"
security_group_names:
- "{{ sg_name }}"
region: "{{ aws_region }}"
purge_instance_ids: true
scheme: "internal"
idle_timeout: 3600
purge_listeners: true
listeners:
- protocol: tcp
load_balancer_port: 80
instance_port: 80
health_check:
ping_protocol: tcp
ping_port: 80
response_timeout: 5
interval: 10
unhealthy_threshold: 2
healthy_threshold: 2
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I expect all ELBs to be created properly with the given settings.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
The ELBs get created but Ansible quits with a Python exception, which prevents further playbooks in a workflow from running.
<!--- Paste verbatim command output between quotes below -->
```
TASK [Create Internal ELB] ******************************************
task path: /home/nsroot/ansible-work/playbooks/create-elbs.yml:96
Using module file /home/nsroot/ansible/lib/ansible/modules/core/cloud/amazon/ec2_elb_lb.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: nsroot
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572 `" && echo ansible-tmp-1474500949.27-185632929248572="` echo $HOME/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572 `" ) && sleep 0'
<localhost> PUT /tmp/tmp9oxobX TO /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/ec2_elb_lb.py; rm -rf "/home/nsroot/.ansible/tmp/ansible-tmp-1474500949.27-185632929248572/" > /dev/null 2>&1 && sleep 0'
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
File "/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py", line 1354, in <module>
main()
File "/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py", line 1338, in main
elb_man.ensure_ok()
File "/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py", line 410, in _do_op
return op(*args, **kwargs)
File "/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py", line 484, in ensure_ok
self._set_elb_listeners()
File "/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py", line 755, in _set_elb_listeners
if existing_listener[0] == int(listener['load_balancer_port']):
KeyError: 0
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "ec2_elb_lb"
},
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\", line 1354, in <module>\n main()\n File \"/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\", line 1338, in main\n elb_man.ensure_ok()\n File \"/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\", line 410, in _do_op\n return op(*args, **kwargs)\n File \"/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\", line 484, in ensure_ok\n self._set_elb_listeners()\n File \"/tmp/ansible_Hbrcmm/ansible_module_ec2_elb_lb.py\", line 755, in _set_elb_listeners\n if existing_listener[0] == int(listener['load_balancer_port']):\nKeyError: 0\n",
"module_stdout": "",
"msg": "MODULE FAILURE"
}
to retry, use: --limit @/home/nsroot/ansible-work/playbooks/create-elbs.retry
```
| main | elb lb intermittently failing to create an elb with keyerror exception issue type bug report component name elb lb py ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables default os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific n a summary when running a playbook to create multiple elbs some of the elbs would intermittently fail to get created with a keyerror exception i tracked this issue down to some data structure inconsistency in the set elb listeners function it would appear that the data type stored in self elb listeners is sometimes a dict and other times a tuple i hacked around this issue but am not sure if my fix has broken anything else see the diff below user utf tmp diff elb lb py mine py if existing listener int listener existing listener found self api listener as tuple existing listener break try if self listener as tuple existing listener int listener existing listener found self listener as tuple existing listener break except keyerror self module fail json msg ran into keyerror bug self elb listeners is s self elb listeners existing listener tuple self api listener as tuple existing listener existing listener tuple self listener as tuple existing listener steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name create control elb elb lb name control elb state present security group names sg name region aws region purge instance ids true idle timeout subnets subnet id purge listeners true listeners protocol tcp load balancer port instance port health check ping protocol http ping port 
ping path health html response timeout interval unhealthy threshold healthy threshold name create service elb elb lb name service elb state present security group names sg name region aws region purge instance ids true idle timeout subnets subnet id purge listeners true listeners protocol tcp load balancer port instance port health check ping protocol https ping port ping path health html response timeout interval unhealthy threshold healthy threshold name create internal elb elb lb name int elb state present subnets subnet id security group names sg name region aws region purge instance ids true scheme internal idle timeout purge listeners true listeners protocol tcp load balancer port instance port health check ping protocol tcp ping port response timeout interval unhealthy threshold healthy threshold expected results i expect all elbs to be created properly with the given settings actual results the elbs get created but ansible quits with a python exception which breaks further playbooks from running in a workflow task task path home nsroot ansible work playbooks create elbs yml using module file home nsroot ansible lib ansible modules core cloud amazon elb lb py establish local connection for user nsroot exec bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp to home nsroot ansible tmp ansible tmp elb lb py exec bin sh c chmod u x home nsroot ansible tmp ansible tmp home nsroot ansible tmp ansible tmp elb lb py sleep exec bin sh c usr bin python home nsroot ansible tmp ansible tmp elb lb py rm rf home nsroot ansible tmp ansible tmp dev null sleep an exception occurred during task execution the full traceback is traceback most recent call last file tmp ansible hbrcmm ansible module elb lb py line in main file tmp ansible hbrcmm ansible module elb lb py line in main elb man ensure ok file tmp ansible hbrcmm ansible module elb lb py line in do op return op args kwargs file tmp ansible hbrcmm 
ansible module elb lb py line in ensure ok self set elb listeners file tmp ansible hbrcmm ansible module elb lb py line in set elb listeners if existing listener int listener keyerror fatal failed changed false failed true invocation module name elb lb module stderr traceback most recent call last n file tmp ansible hbrcmm ansible module elb lb py line in n main n file tmp ansible hbrcmm ansible module elb lb py line in main n elb man ensure ok n file tmp ansible hbrcmm ansible module elb lb py line in do op n return op args kwargs n file tmp ansible hbrcmm ansible module elb lb py line in ensure ok n self set elb listeners n file tmp ansible hbrcmm ansible module elb lb py line in set elb listeners n if existing listener int listener nkeyerror n module stdout msg module failure to retry use limit home nsroot ansible work playbooks create elbs retry | 1 |
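The KeyError in the record above comes from indexing a listener with `[0]` when it is sometimes a tuple and sometimes a dict. A defensive normalization step, sketched below in plain Python, is one way to handle both shapes; the helper names and exact field layout are assumptions for illustration, not the actual `ec2_elb_lb` module code:

```python
def listener_as_tuple(listener):
    """Normalize a listener to (load_balancer_port, instance_port, protocol).

    Indexing a dict-shaped listener with [0] raises KeyError: 0, which is
    the failure shown in the traceback above, so both shapes are accepted.
    """
    if isinstance(listener, dict):
        return (
            int(listener["load_balancer_port"]),
            int(listener["instance_port"]),
            listener["protocol"],
        )
    return tuple(listener)[:3]


def find_existing(listeners, lb_port):
    """Return the normalized listener bound to `lb_port`, or None."""
    for listener in listeners:
        normalized = listener_as_tuple(listener)
        if normalized[0] == int(lb_port):
            return normalized
    return None


mixed = [
    (80, 80, "tcp"),  # tuple form
    {"load_balancer_port": 443, "instance_port": 443, "protocol": "tcp"},
]
assert find_existing(mixed, 443) == (443, 443, "tcp")
assert find_existing(mixed, 8080) is None
```

Normalizing once up front, instead of branching at every access site, keeps the port comparison identical for both shapes.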
10,503 | 8,960,966,358 | IssuesEvent | 2019-01-28 08:13:24 | Microsoft/bedrock | https://api.github.com/repos/Microsoft/bedrock | closed | deploying the simple service is not working correctly | bug services-team | The deployment of the simple service is IMHO not working correctly.
The repo name refers to @timfpark and I cannot push to this Docker repo.
https://github.com/Microsoft/bedrock/blob/master/services/common/deploy-service#L26

Possible ways to go:
we change the values in the file: https://github.com/Microsoft/bedrock/blob/master/services/modules/simple-service/deploy-simple-service
or update the README with an override of the default values.
or we could stop building it and relying on a prebuilt docker image.
The Dockerfile is currently missing the docker package and we would have to mount the docker socket into the container while starting it.
Steps done:
tried to test it via minikube:
`docker build -t bedrock:latest .`
```
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock --net=host -it -v ~/.minikube:/.minikube -v ~/.kube/config:/.kube/config -e TF_VAR_grafana_admin_password="SECRETpass" bedrock:latest /bin/bash
```
```
cd services/environments/dev
./init
./apply
```
@timfpark what do you think?
All the best,
Benjamin
| 1.0 | deploying the simple service is not working correctly - The deployment of the simple service is IMHO not working correctly.
The repo name refers to @timfpark and I cannot push to this Docker repo.
https://github.com/Microsoft/bedrock/blob/master/services/common/deploy-service#L26

Possible ways to go:
we change the values in the file:https://github.com/Microsoft/bedrock/blob/master/services/modules/simple-service/deploy-simple-service
or update the README with an overwrite of the default values.
or we could stop building it and relying on a prebuilt docker image.
The Dockerfile is currently missing the docker package and we would have to mount the docker socket into the container while starting it.
Steps done:
tried to test it via minikube:
`docker build -t bedrock:latest .`
```
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock --net=host -it -v ~/.minikube:/.minikube -v ~/.kube/config:/.kube/config -e TF_VAR_grafana_admin_password="SECRETpass" bedrock:latest /bin/bash
```
```
cd services/environments/dev
./init
./apply
```
@timfpark what do you think?
All the best,
Benjamin
| non_main | deploying the simple service is not working correctly the deployment of the simple service is imho not working correctly the repo name refers to timfpark and i cannot push to this docker repo possbile way to go we change the values in the file or update the readme with an overwrite of the default values or we could stop building it and relying on a prebuilt docker image the dockerfile is currently missing the docker package and we would have to mount the docker socket into the container while starting it steps done tried to test it via minikube docker build t bedrock latest docker run rm v var run docker sock var run docker sock net host it v minikube minikube v kube config kube config e tf var grafana admin password secretpass bedrock latest bin bash cd services environments dev init apply timfpark what do you think all the best benjamin | 0 |
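The first suggestion in the record above (stop hard-coding one maintainer's repository) is commonly solved by reading the value from the environment with a fallback. Below is a minimal Python sketch; the `DOCKER_REPO` variable name and image tag are hypothetical, not taken from the bedrock scripts:

```python
import os


def docker_image(env=None):
    """Build an image reference, letting DOCKER_REPO override the default.

    A deploy script that hard-codes a single contributor's Docker repo
    cannot be reused by others; an environment override keeps the default
    behavior while allowing any contributor to substitute their own repo.
    """
    env = os.environ if env is None else env
    repo = env.get("DOCKER_REPO", "timfpark")  # default named in the issue
    return f"{repo}/simple-service:latest"


assert docker_image({}) == "timfpark/simple-service:latest"
assert docker_image({"DOCKER_REPO": "myorg"}) == "myorg/simple-service:latest"
```

The same pattern in a shell deploy script would be a one-liner such as `REPO="${DOCKER_REPO:-timfpark}"`.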
1,990 | 6,694,279,342 | IssuesEvent | 2017-10-10 00:50:39 | duckduckgo/zeroclickinfo-spice | https://api.github.com/repos/duckduckgo/zeroclickinfo-spice | closed | Bible: example queries failing to trigger | Maintainer Input Requested | The example queries from the IA page do not trigger this IA. The IA page is also missing the data source information.
---
IA Page: http://duck.co/ia/view/bible
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @hunterlang
| True | Bible: example queries failing to trigger - The example queries from the IA page do not trigger this IA. The IA page is also missing the data source information.
---
IA Page: http://duck.co/ia/view/bible
[Maintainer](http://docs.duckduckhack.com/maintaining/guidelines.html): @hunterlang
| main | bible example queries failing to trigger the example queries from the ia page do not trigger this ia the ia page is also missing the data source information ia page hunterlang | 1 |
4,501 | 23,419,183,913 | IssuesEvent | 2022-08-13 12:28:31 | rustsec/advisory-db | https://api.github.com/repos/rustsec/advisory-db | closed | `toml` is unmaintained per its original maintainer | Unmaintained | A widely used crate that the maintainer recognizes but can no longer maintain
https://github.com/alexcrichton/toml-rs/issues/460 | True | `toml` is unmaintained per its original maintainer - largely used crate the maintainer recognizes but can no longer maintain
https://github.com/alexcrichton/toml-rs/issues/460 | main | toml is unmaintained per its original maintainer largely used crate the maintainer recognizes but can no longer maintain | 1 |
45,565 | 12,877,835,654 | IssuesEvent | 2020-07-11 13:25:58 | msofficesvn/msofficesvn | https://api.github.com/repos/msofficesvn/msofficesvn | closed | Word 2010 x64 error. | Priority-High Type-Defect auto-migrated | ```
What steps will reproduce the problem?
1. Follow the installation instructions and place the files in the Startup
folder.
2. Open MS Word (MS Office 2010 version 14.0.6112.5000 64bit)
3. Message box appears which says "compile error in hidden module: cmnIniFile.
This error commonly occurs when code is incompatible with the version, platform
or architecture of this application."
4. All the buttons on the ribbon don't work.
What version of the product are you using? On what operating system?
Version 132
Please provide any additional information below.
If you can let me know how to build the dotm file in MS Word I can try to fix
this. I am unfamiliar with how Word builds these files.
```
Original issue reported on code.google.com by `DaveAlls...@gmail.com` on 29 May 2012 at 5:22
| 1.0 | Word 2010 x64 error. - ```
What steps will reproduce the problem?
1. Follow the installation instructions and place the files in the Startup
folder.
2. Open MS Word (MS Office 2010 version 14.0.6112.5000 64bit)
3. Message box appears which says "compile error in hidden module: cmnIniFile.
This error commonly occurs when code is incompatible with the version, platform
or architecture of this application."
4. All the buttons on the ribbon don't work.
What version of the product are you using? On what operating system?
Version 132
Please provide any additional information below.
IF you can let me know how to build the dotm file in MS Word I can try to fix
this. I am unfamiliar on how Word builds these files.
```
Original issue reported on code.google.com by `DaveAlls...@gmail.com` on 29 May 2012 at 5:22
| non_main | word error what steps will reproduce the problem follow the intallation instructions and placed the files in the startup folder open ms word ms office version mesaage box appears which says compile error in hidden module cmninifile this error commonly occurs when code is incompatible with the version platform or architecture of this application all the buttons on the ribbon don t work what version of the product are you using on what operating system version please provide any additional information below if you can let me know how to build the dotm file in ms word i can try to fix this i am unfamiliar on how word builds these files original issue reported on code google com by davealls gmail com on may at | 0 |
5,649 | 28,491,631,890 | IssuesEvent | 2023-04-18 11:39:50 | OpenRefine/OpenRefine | https://api.github.com/repos/OpenRefine/OpenRefine | closed | Standardize ordering of Java modifiers | enhancement maintainability CI/CD | This is issue is very low and the main idea is to emphasis clean code. i know we might overlook but its much vital in organisation code of conduct cc @antoine2711 @wetneb
| True | Standardize ordering of Java modifiers - This is issue is very low and the main idea is to emphasis clean code. i know we might overlook but its much vital in organisation code of conduct cc @antoine2711 @wetneb
| main | standardize ordering of java modifiers this is issue is very low and the main idea is to emphasis clean code i know we might overlook but its much vital in organisation code of conduct cc wetneb | 1 |
159,119 | 6,040,886,156 | IssuesEvent | 2017-06-10 18:33:24 | roschaefer/story.board | https://api.github.com/repos/roschaefer/story.board | reopened | BUG inserted markup by menu will not render | bug Priority: medium User Story | The inserted markup in text.component create via new menu is not rendered correctly.
– Sensor-Markup should Output the Sensor value, e.g. 37° C
– Event-Markup should Output the Date and Time in German Format (important!), eg. 27. Mai, 9:17 Uhr. | 1.0 | BUG inserted markup by menu will not render - The inserted markup in text.component create via new menu is not rendered correctly.
– Sensor-Markup should Output the Sensor value, e.g. 37° C
– Event-Markup should Output the Date and Time in German Format (important!), eg. 27. Mai, 9:17 Uhr. | non_main | bug inserted markup by menu will not render the inserted markup in text component create via new menu is not rendered correctly – sensor markup should output the sensor value e g ° c – event markup should output the date and time in german format important eg mai uhr | 0 |
14,230 | 4,856,810,590 | IssuesEvent | 2016-11-12 08:08:56 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | new menu item check | No Code Attached Yet | ### Steps to reproduce the issue
It is not allowed that a menu item alias is the same as a sub-folder in your Joomla installation.
The message will be: "Save failed with the following error: A first level menu item alias can't be 'tmp' because 'tmp' is a sub-folder of your joomla installation folder."
My problem is that when you enter a new menu item this check is not done, before creating the menu item.
So you can create a menu item with the name of a sub-folder. The moment the item is creacted you can't save it again.
### Expected result
Check before the item is created.
### Actual result
No check and an error on the frontend of the site because your not allowed to browse a sub-folder.
### System information (as much as possible)
### Additional comments
| 1.0 | new menu item check - ### Steps to reproduce the issue
It is not allowed that a menu item alias is the same as a sub-folder in your Joomla installation.
The message will be: "Save failed with the following error: A first level menu item alias can't be 'tmp' because 'tmp' is a sub-folder of your joomla installation folder."
My problem is that when you enter a new menu item this check is not done, before creating the menu item.
So you can create a menu item with the name of a sub-folder. The moment the item is creacted you can't save it again.
### Expected result
Check before the item is created.
### Actual result
No check and an error on the frontend of the site because your not allowed to browse a sub-folder.
### System information (as much as possible)
### Additional comments
| non_main | new menu item check steps to reproduce the issue it is not allowed that a menu item alias is the same as a sub folder in your joomla installation the message will be save failed with the following error a first level menu item alias can t be tmp because tmp is a sub folder of your joomla installation folder my problem is that when you enter a new menu item this check is not done before creating the menu item so you can create a menu item with the name of a sub folder the moment the item is creacted you can t save it again expected result check before the item is created actual result no check and an error on the frontend of the site because your not allowed to browse a sub folder system information as much as possible additional comments | 0 |
4,490 | 23,375,624,323 | IssuesEvent | 2022-08-11 02:28:07 | restqa/restqa | https://api.github.com/repos/restqa/restqa | closed | [DASHBOARD] Scenario Generation | enhancement good first issue wontfix pair with maintainer | Hello 👋,
### 👀 Background
One of the key feature of the RestQA command line tool is the scenario generation from the curl command.
See article: https://medium.com/restqa/generate-your-api-automation-test-using-curl-5610355d69c4
### ✌️ What is the actual behavior?
Currently the feature is only accessible through the command line tool.
### 🕵️♀️ How to reproduce the current behavior?
1. Install RestQA `npm i -g @restqa/restqa`
2. Initiate a RestQA project `restqa init`
3. Generate a scenario: `restqa generate curl https://jsonplaceholder.typicode.com/todos/1`
### 🤞 What is the expected behavior?
It would be great to be able to access to this key feature from the dashboard.
### 😎 Proposed solution.
- [ ] Create a new section on the dashboard `Scenario generation`
- [ ] Find the best UI approach for the form and button
> Tips: use the existing api `POST /api/restqa/generate`
Cheers.
| True | [DASHBOARD] Scenario Generation - Hello 👋,
### 👀 Background
One of the key feature of the RestQA command line tool is the scenario generation from the curl command.
See article: https://medium.com/restqa/generate-your-api-automation-test-using-curl-5610355d69c4
### ✌️ What is the actual behavior?
Currently the feature is only accessible through the command line tool.
### 🕵️♀️ How to reproduce the current behavior?
1. Install RestQA `npm i -g @restqa/restqa`
2. Initiate a RestQA project `restqa init`
3. Generate a scenario: `restqa generate curl https://jsonplaceholder.typicode.com/todos/1`
### 🤞 What is the expected behavior?
It would be great to be able to access to this key feature from the dashboard.
### 😎 Proposed solution.
- [ ] Create a new section on the dashboard `Scenario generation`
- [ ] Find the best UI approach for the form and button
> Tips: use the existing api `POST /api/restqa/generate`
Cheers.
| main | scenario generation hello 👋 👀 background one of the key feature of the restqa command line tool is the scenario generation from the curl command see article ✌️ what is the actual behavior currently the feature is only accessible through the command line tool 🕵️♀️ how to reproduce the current behavior install restqa npm i g restqa restqa initiate a restqa project restqa init generate a scenario restqa generate curl 🤞 what is the expected behavior it would be great to be able to access to this key feature from the dashboard 😎 proposed solution create a new section on the dashboard scenario generation find the best ui approach for the form and button tips use the existing api post api restqa generate cheers | 1 |
1,241 | 5,300,399,502 | IssuesEvent | 2017-02-10 04:39:02 | caskroom/homebrew-cask | https://api.github.com/repos/caskroom/homebrew-cask | opened | Look into alternative build solutions | awaiting maintainer feedback discussion travis | Travis CI has had a great run but seems to be lagging fairly regularly recently.
Maybe it's time to switch CI providers? (or revisit our build/test scripts)
| True | Look into alternative build solutions - Travis CI has had a great run but seems to be lagging fairly regularly recently.
Maybe it's time to switch CI providers? (or revisit our build/test scripts)
| main | look into alternative build solutions travis ci has had a great run but seems to be lagging fairly regularly recently maybe it s time to switch ci providers or revisit our build test scripts | 1 |
1,870 | 6,577,493,454 | IssuesEvent | 2017-09-12 01:18:04 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | os_router can't take in port id as interface | affects_2.0 cloud feature_idea openstack waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
os_router
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /root/setup-infra/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
No changes made to ansible.cfg
##### OS / ENVIRONMENT
I'm running Ubuntu 14.04, but this module is not platform-specific I don't think.
##### SUMMARY
os_router can't take in a port ID as an internal interface, only a subnet. See:
https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L321
The neutron CLI allows you to specify a port ID as an interface, and therefore allow you to specify an arbitrary IP for that interface. It would be nice if the Ansible os_router module would allow you to do that.
##### STEPS TO REPRODUCE
This added feature would allow you to do something like:
```
- name: Create port for my_net
os_port:
state: present
name: "my_net_port"
network: "my_net"
fixed_ips:
- ip_address: "192.168.100.50"
register: my_net_port_results
- name: Create my router
os_router:
name: my_router
state: present
network: "ext-net"
interfaces:
- port: "{{ my_net_port_results.id }}"
- "some_other_priv_subnet"
```
This would allow the user to specify either a subnet or a port for a router internal interface.
##### EXPECTED RESULTS
The router would have two interfaces with the example playbook shown above. It would have the default gateway of "some_other_priv_subnet", and it would have the ip assigned to "my_net_port".
This would allow subnets to be attached to multiple routers, which currently isn't do-able through the os_router module.
##### ACTUAL RESULTS
TBD
| True | os_router can't take in port id as interface - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
os_router
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file = /root/setup-infra/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
No changes made to ansible.cfg
##### OS / ENVIRONMENT
I'm running Ubuntu 14.04, but this module is not platform-specific I don't think.
##### SUMMARY
os_router can't take in a port ID as an internal interface, only a subnet. See:
https://github.com/ansible/ansible-modules-core/blob/devel/cloud/openstack/os_router.py#L321
The neutron CLI allows you to specify a port ID as an interface, and therefore allow you to specify an arbitrary IP for that interface. It would be nice if the Ansible os_router module would allow you to do that.
##### STEPS TO REPRODUCE
This added feature would allow you to do something like:
```
- name: Create port for my_net
os_port:
state: present
name: "my_net_port"
network: "my_net"
fixed_ips:
- ip_address: "192.168.100.50"
register: my_net_port_results
- name: Create my router
os_router:
name: my_router
state: present
network: "ext-net"
interfaces:
- port: "{{ my_net_port_results.id }}"
- "some_other_priv_subnet"
```
This would allow the user to specify either a subnet or a port for a router internal interface.
##### EXPECTED RESULTS
The router would have two interfaces with the example playbook shown above. It would have the default gateway of "some_other_priv_subnet", and it would have the ip assigned to "my_net_port".
This would allow subnets to be attached to multiple routers, which currently isn't do-able through the os_router module.
##### ACTUAL RESULTS
TBD
| main | os router can t take in port id as interface issue type feature idea component name os router ansible version ansible config file root setup infra ansible cfg configured module search path default w o overrides configuration no changes made to ansible cfg os environment i m running ubuntu but this module is not platform specific i don t think summary os router can t take in a port id as an internal interface only a subnet see the neutron cli allows you to specify a port id as an interface and therefore allow you to specify an arbitrary ip for that interface it would be nice if the ansible os router module would allow you to do that steps to reproduce this added feature would allow you to do something like name create port for my net os port state present name my net port network my net fixed ips ip address register my net port results name create my router os router name my router state present network ext net interfaces port my net port results id some other priv subnet this would allow the user to specify either a subnet or a port for a router internal interface expected results the router would have two interfaces with the example playbook shown above it would have the default gateway of some other priv subnet and it would have the ip assigned to my net port this would allow subnets to be attached to multiple routers which currently isn t do able through the os router module actual results tbd | 1 |
1,849 | 6,577,390,574 | IssuesEvent | 2017-09-12 00:34:54 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | vsphere_guest: vm_nic should include manual MAC change feature | affects_2.3 bug_report cloud feature_idea vmware waiting_on_maintainer | ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
N/A
##### SUMMARY
Currently during VM reconfiguring following options are supported for vm_nic.
```
vm_nic:
nic1:
type: vmxnet3
network: VM Network
network_type: standard
```
I think this should be extended with feature to define MAC address manually, adding
address_type: manual
address: "00:0c:29:ac:70:96"
Final look might be like:
```
vm_nic:
nic1:
type: vmxnet3
network: VM Network
network_type: standard
address_type: manual
address: "00:0c:29:ac:70:96"
```
This functionality looks like might be supported by pysphere, but currently not implemented in Ansible.
This feature might be useful when Ansible is used to rebuild same VMs multiple times and there are static DHCP leases configured for exact MAC addresses.
| True | vsphere_guest: vm_nic should include manual MAC change feature - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
vsphere_guest
##### ANSIBLE VERSION
N/A
##### SUMMARY
Currently during VM reconfiguring following options are supported for vm_nic.
```
vm_nic:
nic1:
type: vmxnet3
network: VM Network
network_type: standard
```
I think this should be extended with feature to define MAC address manually, adding
address_type: manual
address: "00:0c:29:ac:70:96"
Final look might be like:
```
vm_nic:
nic1:
type: vmxnet3
network: VM Network
network_type: standard
address_type: manual
address: "00:0c:29:ac:70:96"
```
This functionality looks like might be supported by pysphere, but currently not implemented in Ansible.
This feature might be useful when Ansible is used to rebuild same VMs multiple times and there are static DHCP leases configured for exact MAC addresses.
| main | vsphere guest vm nic should include manual mac change feature issue type bug report component name vsphere guest ansible version n a summary currently during vm reconfiguring following options are supported for vm nic vm nic type network vm network network type standard i think this should be extended with feature to define mac address manually adding address type manual address ac final look might be like vm nic type network vm network network type standard address type manual address ac this functionality looks like might be supported by pysphere but currently not implemented in ansible this feature might be useful when ansible is used to rebuild same vms multiple times and there are static dhcp leases configured for exact mac addresses | 1 |
2,923 | 10,420,757,683 | IssuesEvent | 2019-09-16 02:33:11 | beefproject/beef | https://api.github.com/repos/beefproject/beef | closed | Continued Maintenance and Development | Maintainability | All BeEF developers have moved on to other projects. There are no active full-time BeEF developers.
Occasionally, myself or others will drop by to keep the project alive. This involves responding to open issues, reviewing pull requests, performing maintenance, and occasionally adding new features.
Development efforts are mostly directed towards maintenance and cleaning up the codebase to make maintenance easier.
For the time being, this means you're more likely to see bug fixes than new features.
---
**BeEF must be able to hook all web browsers**
If BeEF cannot hook a browser, this is a failure of the project, and should be reviewed as a matter of high priority. There are, however, a few exceptions; specifically [Internet Explorer 5 and earlier](https://github.com/beefproject/beef/issues/1392#issuecomment-462132486).
If functionality within the framework is preventing hooking, it will disabled by default, until such time as it can be reviewed and fixed, or potentially removed entirely.
**BeEF should "just work"**
All functionality enabled by default should work out of the box.
This includes trying to make BeEF as user-friendly as possible. A tonne of error handling has been added to smooth some of the sharp edges, and warn on incorrect configuration, resulting in intelligible error messages and warnings rather than stack traces. This also ensures we at least have something to work with when issues are posted on the issue tracker.
Several Linux distributions, such as Pentoo, Parrot, and Kali, ship a packaged version of BeEF (thanks!). Where possible, I'd like to make their lives easier. This largely involves keeping dependencies up-to-date and testing to ensure nothing breaks.
| True | Continued Maintenance and Development - All BeEF developers have moved on to other projects. There are no active full-time BeEF developers.
Occasionally, myself or others will drop by to keep the project alive. This involves responding to open issues, reviewing pull requests, performing maintenance, and occasionally adding new features.
Development efforts are mostly directed towards maintenance and cleaning up the codebase to make maintenance easier.
For the time being, this means you're more likely to see bug fixes than new features.
---
**BeEF must be able to hook all web browsers**
If BeEF cannot hook a browser, this is a failure of the project, and should be reviewed as a matter of high priority. There are, however, a few exceptions; specifically [Internet Explorer 5 and earlier](https://github.com/beefproject/beef/issues/1392#issuecomment-462132486).
If functionality within the framework is preventing hooking, it will disabled by default, until such time as it can be reviewed and fixed, or potentially removed entirely.
**BeEF should "just work"**
All functionality enabled by default should work out of the box.
This includes trying to make BeEF as user-friendly as possible. A tonne of error handling has been added to smooth some of the sharp edges, and warn on incorrect configuration, resulting in intelligible error messages and warnings rather than stack traces. This also ensures we at least have something to work with when issues are posted on the issue tracker.
Several Linux distributions, such as Pentoo, Parrot, and Kali, ship a packaged version of BeEF (thanks!). Where possible, I'd like to make their lives easier. This largely involves keeping dependencies up-to-date and testing to ensure nothing breaks.
| main | continued maintenance and development all beef developers have moved on to other projects there are no active full time beef developers occasionally myself or others will drop by to keep the project alive this involves responding to open issues reviewing pull requests performing maintenance and occasionally adding new features development efforts are mostly directed towards maintenance and cleaning up the codebase to make maintenance easier for the time being this means you re more likely to see bug fixes than new features beef must be able to hook all web browsers if beef cannot hook a browser this is a failure of the project and should be reviewed as a matter of high priority there are however a few exceptions specifically if functionality within the framework is preventing hooking it will disabled by default until such time as it can be reviewed and fixed or potentially removed entirely beef should just work all functionality enabled by default should work out of the box this includes trying to make beef as user friendly as possible a tonne of error handling has been added to smooth some of the sharp edges and warn on incorrect configuration resulting in intelligible error messages and warnings rather than stack traces this also ensures we at least have something to work with when issues are posted on the issue tracker several linux distributions such as pentoo parrot and kali ship a packaged version of beef thanks where possible i d like to make their lives easier this largely involves keeping dependencies up to date and testing to ensure nothing breaks | 1 |
171,145 | 27,068,304,859 | IssuesEvent | 2023-02-14 03:24:50 | bcgov/cas-review | https://api.github.com/repos/bcgov/cas-review | closed | Help brainstorm 'branch inception' | Service Design High Priority Moose | Acceptance Criteria:
- [x] Create facilitation plan for branch inception brainstorming on Jan. 30
- [x] Hold Jan. 30 session w/MS
- [x] Transcribe whiteboard to Miro
- [x] Document next steps
| 1.0 | Help brainstorm 'branch inception' - Acceptance Criteria:
- [x] Create facilitation plan for branch inception brainstorming on Jan. 30
- [x] Hold Jan. 30 session w/MS
- [x] Transcribe whiteboard to Miro
- [x] Document next steps
| non_main | help brainstorm branch inception acceptance criteria create facilitation plan for branch inception brainstorming on jan hold jan session w ms transcribe whiteboard to miro document next steps | 0 |
2,167 | 7,573,198,930 | IssuesEvent | 2018-04-23 17:01:47 | chocolatey/chocolatey-package-requests | https://api.github.com/repos/chocolatey/chocolatey-package-requests | reopened | RFM - Goodsync | Status: Available For Maintainer(s) | https://chocolatey.org/packages/GoodSync
Package currently installs the old v9, not the new v10. All that needs to happen (it seems) is to update the url
v9: https://www.goodsync.com/download/GoodSync-Setup.msi
v10: https://www.goodsync.com/download/GoodSync-v10-Setup.msi
Package-maintainer is unresponsive, and it sounds like an issue like this is the way forward. | True | RFM - Goodsync - https://chocolatey.org/packages/GoodSync
Package currently installs the old v9, not the new v10. All that needs to happen (it seems) is to update the url
v9: https://www.goodsync.com/download/GoodSync-Setup.msi
v10: https://www.goodsync.com/download/GoodSync-v10-Setup.msi
Package-maintainer is unresponsive, and it sounds like an issue like this is the way forward. | main | rfm goodsync package currently installs the old not the new all that needs to happen it seems is to update the url package maintainer is unresponsive and it sounds like an issue like this is the way forward | 1 |
473,410 | 13,641,970,239 | IssuesEvent | 2020-09-25 14:52:58 | projectcontour/contour | https://api.github.com/repos/projectcontour/contour | closed | Feature request: CORS support | help wanted priority/important-longterm | As far as I can see, in v2 API, it can be implemented only as a http filter: https://github.com/envoyproxy/data-plane-api/blob/master/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto.
In order to have secure enough defaults I think this should be disabled by default and let people enable explicitly with a new command-line parameter.
The implementation is fairly easy I can send a PR if you feel this can get into contour.
WDYT @davecheney?
---
[@rosskukulinski copy/paste] Proposed spec:
[...]
tls:
# required, the name of a secret in the current namespace
secretName: google-tls
# other properties like cipher suites may be added later
# if present describes the CORS policy.
cors:
# Specifies the origins that will be allowed to do CORS requests.
allowOrigin:
- www.google.com
- google.com
# Specifies the content for the *access-control-allow-methods*
# header.
allowMethods:
- GET
- POST
# Specifies the content for the *access-control-allow-headers*
# header.
allowHeaders:
- Content-Type
# Specifies the content for the *access-control-expose-headers*
# header.
exposeHeaders:
- Content-Type
# Specifies the content for the *access-control-max-age* header.
maxAge: 24h
# Specifies whether the resource allows credentials.
allowCredentials: true
strategy: RoundRobin # (Optional) LB Algorithm to apply to all services, defaults for all services
[...] | 1.0 | Feature request: CORS support - As far as I can see, in v2 API, it can be implemented only as a http filter: https://github.com/envoyproxy/data-plane-api/blob/master/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto.
In order to have secure enough defaults I think this should be disabled by default and let people enable explicitly with a new command-line parameter.
The implementation is fairly easy I can send a PR if you feel this can get into contour.
WDYT @davecheney?
---
[@rosskukulinski copy/paste] Proposed spec:
[...]
tls:
# required, the name of a secret in the current namespace
secretName: google-tls
# other properties like cipher suites may be added later
# if present describes the CORS policy.
cors:
# Specifies the origins that will be allowed to do CORS requests.
allowOrigin:
- www.google.com
- google.com
# Specifies the content for the *access-control-allow-methods*
# header.
allowMethods:
- GET
- POST
# Specifies the content for the *access-control-allow-headers*
# header.
allowHeaders:
- Content-Type
# Specifies the content for the *access-control-expose-headers*
# header.
exposeHeaders:
- Content-Type
# Specifies the content for the *access-control-max-age* header.
maxAge: 24h
# Specifies whether the resource allows credentials.
allowCredentials: true
strategy: RoundRobin # (Optional) LB Algorithm to apply to all services, defaults for all services
[...] | non_main | feature request cors support as far as i can see in api it can be implemented only as a http filter in order to have secure enough defaults i think this should be disabled by default and let people enable explicitly with a new command line parameter the implementation is fairly easy i can send a pr if you feel this can get into contour wdyt davecheney proposed spec tls required the name of a secret in the current namespace secretname google tls other properties like cipher suites may be added later if present describes the cors policy cors specifies the origins that will be allowed to do cors requests alloworigin google com specifies the content for the access control allow methods header allowmethods get post specifies the content for the access control allow headers header allowheaders content type specifies the content for the access control expose headers header exposeheaders content type specifies the content for the access control max age header maxage specifies whether the resource allows credentials allowcredentials true strategy roundrobin optional lb algorithm to apply to all services defaults for all services | 0 |
1,004 | 4,772,026,973 | IssuesEvent | 2016-10-26 19:39:56 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | maven_artifact cannot download sbt published artifact | affects_2.0 bug_report waiting_on_maintainer | ##### Issue Type:
Please pick one and delete the rest:
- Bug Report
##### Plugin Name:
maven_artifact
##### Ansible Version:
```
ansible 2.0.0.2
config file = /cygdrive/c/Users/.../MyProject/ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
##### Environment:
N/A
##### Summary:
Unable to download a snapshot artifact from our company maven repo (Sonatype Nexus). The artifact was published with sbt and setting `publishMavenStyle := true`.
When publishing with sbt, it uploads an artifact with a "SNAPSHOT" suffix rather than a timestamp-build number suffix.
Sample maven-metadata.xml:
```
<metadata modelVersion="1.1.0">
<groupId>[some-group-id]</groupId>
<artifactId>[some-artifact-id]</artifactId>
<version>0.1-SNAPSHOT</version>
<versioning>
<lastUpdated>20160223105322</lastUpdated>
</versioning>
</metadata>
```
##### Steps To Reproduce:
Use maven-artifact to download any snapshot artifact from any maven repo.
##### Expected Results:
Artifact downloaded
##### Actual Results:
```
TASK [MyProject : Download artifact] ***************************
fatal: [local-dev-mwd]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible-tmp-1456157113.39-94202179773144/maven_artifact\", line 3089, in <module>\r\n main()\r\n File \"/tmp/ansible-tmp-1456157113.39-94202179773144/maven_artifact\", line 346, in main\r\n if downloader.download(artifact, dest):\r\n File \"/tmp/ansible-tmp-1456157113.39-94202179773144/maven_artifact\", line 237, in download\r\n url = self.find_uri_for_artifact(artifact)\r\n File \"/tmp/ansible-tmp-1456157113.39-94202179773144/maven_artifact\", line 202, in find_uri_for_artifact\r\n timestamp = xml.xpath(\"/metadata/versioning/snapshot/timestamp/text()\")[0]\r\nIndexError: list index out of range\r\n", "msg": "MODULE FAILURE", "parsed": false}
```
| True | maven_artifact cannot download sbt published artifact - ##### Issue Type:
Please pick one and delete the rest:
- Bug Report
##### Plugin Name:
maven_artifact
##### Ansible Version:
```
ansible 2.0.0.2
config file = /cygdrive/c/Users/.../MyProject/ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
##### Environment:
N/A
##### Summary:
Unable to download a snapshot artifact from our company maven repo (Sonatype Nexus). The artifact was published with sbt and setting `publishMavenStyle := true`.
When publishing with sbt, it uploads an artifact with a "SNAPSHOT" suffix rather than a timestamp-build number suffix.
Sample maven-metadata.xml:
```
<metadata modelVersion="1.1.0">
<groupId>[some-group-id]</groupId>
<artifactId>[some-artifact-id]</artifactId>
<version>0.1-SNAPSHOT</version>
<versioning>
<lastUpdated>20160223105322</lastUpdated>
</versioning>
</metadata>
```
##### Steps To Reproduce:
Use maven-artifact to download any snapshot artifact from any maven repo.
##### Expected Results:
Artifact downloaded
##### Actual Results:
```
TASK [MyProject : Download artifact] ***************************
fatal: [local-dev-mwd]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible-tmp-1456157113.39-94202179773144/maven_artifact\", line 3089, in <module>\r\n main()\r\n File \"/tmp/ansible-tmp-1456157113.39-94202179773144/maven_artifact\", line 346, in main\r\n if downloader.download(artifact, dest):\r\n File \"/tmp/ansible-tmp-1456157113.39-94202179773144/maven_artifact\", line 237, in download\r\n url = self.find_uri_for_artifact(artifact)\r\n File \"/tmp/ansible-tmp-1456157113.39-94202179773144/maven_artifact\", line 202, in find_uri_for_artifact\r\n timestamp = xml.xpath(\"/metadata/versioning/snapshot/timestamp/text()\")[0]\r\nIndexError: list index out of range\r\n", "msg": "MODULE FAILURE", "parsed": false}
```
| main | maven artifact cannot download sbt published artifact issue type please pick one and delete the rest bug report plugin name maven artifact ansible version ansible config file cygdrive c users myproject ansible cfg configured module search path default w o overrides ansible configuration environment n a summary unable to download a snapshot artifact from our company maven repo sonatype nexus the artifact was published with sbt and setting publishmavenstyle true when publishing with sbt it uploads an artifact with a snapshot suffix rather than a timestamp build number suffix sample maven metadata xml snapshot steps to reproduce use maven artifact to download any snapshot artifact from any maven repo expected results artifact downloaded actual results task fatal failed changed false failed true module stderr module stdout traceback most recent call last r n file tmp ansible tmp maven artifact line in r n main r n file tmp ansible tmp maven artifact line in main r n if downloader download artifact dest r n file tmp ansible tmp maven artifact line in download r n url self find uri for artifact artifact r n file tmp ansible tmp maven artifact line in find uri for artifact r n timestamp xml xpath metadata versioning snapshot timestamp text r nindexerror list index out of range r n msg module failure parsed false | 1 |
769 | 4,381,173,587 | IssuesEvent | 2016-08-06 02:55:34 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | ec2_remote_facts throws "unsupported parameter for module: aws_secret_key" | aws bug_report cloud waiting_on_maintainer | - name: Find EC2 instances
ec2_remote_facts:
key: Name
value: "{{ aws_project_name_env_branch }}"
region: "{{ aws_region }}"
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
register: ec2_instances
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "unsupported parameter for module: aws_secret_key"} | True | ec2_remote_facts throws "unsupported parameter for module: aws_secret_key" - - name: Find EC2 instances
ec2_remote_facts:
key: Name
value: "{{ aws_project_name_env_branch }}"
region: "{{ aws_region }}"
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_key }}"
register: ec2_instances
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "unsupported parameter for module: aws_secret_key"} | main | remote facts throws unsupported parameter for module aws secret key name find instances remote facts key name value aws project name env branch region aws region aws access key aws access key aws secret key aws secret key register instances fatal failed changed false failed true msg unsupported parameter for module aws secret key | 1 |
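The failure above is the generic error an Ansible module raises when it is passed a parameter its `argument_spec` does not declare — the affected release of `ec2_remote_facts` simply did not list the AWS credential options. A toy sketch of that validation step (not Ansible's actual implementation; the spec and function names are illustrative):

```python
def check_params(argument_spec, params):
    """Reject any parameter not declared in the spec, mirroring the
    'unsupported parameter for module' failure (illustrative only)."""
    unsupported = [k for k in params if k not in argument_spec]
    if unsupported:
        return {"failed": True,
                "msg": "unsupported parameter for module: %s" % unsupported[0]}
    return {"failed": False}

# A spec without the AWS credential options, as in the affected release.
spec = {"key": {}, "value": {}, "region": {}}
result = check_params(spec, {"key": "Name", "aws_secret_key": "..."})
print(result["msg"])  # unsupported parameter for module: aws_secret_key
```

With a spec like this, supplying credentials via environment variables rather than task parameters sidesteps the rejection.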
1,313 | 5,559,438,429 | IssuesEvent | 2017-03-24 16:56:30 | WhitestormJS/whitestorm.js | https://api.github.com/repos/WhitestormJS/whitestorm.js | closed | Update CONTRIBUTING.md guide | FIXME MAINTAINANCE | Some use cases of [`CONTRIBUTING.md`](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md) file are deprecated since of refactoring changes in V2 and we should update them:
- [x] [**CLI**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#cli)
- [x] Fix commands, add new ones.
- [x] Force the use of **npm commands**
- [x] [**Commit names**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#commiting)
- [x] New rule: no dot after short codes
- Use of **WIP** is now undesirable.
- [x] [**Changelog**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#-adding-changes-to-changelogmd) - is now deprecated. Use [github releases](https://github.com/WhitestormJS/whitestorm.js/releases)
###### Version:
- [x] v2.x.x
- [ ] v1.x.x
###### Issue type:
- [ ] Bug
- [x] Proposal/Enhancement
- [ ] Question
------
<details>
<summary> <b>Tested on: </b> </summary>
###### --- Desktop
- [ ] Chrome
- [ ] Chrome Canary
- [ ] Chrome dev-channel
- [ ] Firefox
- [ ] Opera
- [ ] Microsoft IE
- [ ] Microsoft Edge
###### --- Android
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
###### --- IOS
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
</details>
| True | Update CONTRIBUTING.md guide - Some use cases of [`CONTRIBUTING.md`](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md) file are deprecated since of refactoring changes in V2 and we should update them:
- [x] [**CLI**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#cli)
- [x] Fix commands, add new ones.
- [x] Force the use of **npm commands**
- [x] [**Commit names**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#commiting)
- [x] New rule: no dot after short codes
- Use of **WIP** is now undesirable.
- [x] [**Changelog**](https://github.com/WhitestormJS/whitestorm.js/blob/beta/.github/CONTRIBUTING.md#-adding-changes-to-changelogmd) - is now deprecated. Use [github releases](https://github.com/WhitestormJS/whitestorm.js/releases)
###### Version:
- [x] v2.x.x
- [ ] v1.x.x
###### Issue type:
- [ ] Bug
- [x] Proposal/Enhancement
- [ ] Question
------
<details>
<summary> <b>Tested on: </b> </summary>
###### --- Desktop
- [ ] Chrome
- [ ] Chrome Canary
- [ ] Chrome dev-channel
- [ ] Firefox
- [ ] Opera
- [ ] Microsoft IE
- [ ] Microsoft Edge
###### --- Android
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
###### --- IOS
- [ ] Chrome
- [ ] Firefox
- [ ] Opera
</details>
| main | update contributing md guide some use cases of file are deprecated since of refactoring changes in and we should update them fix commands add new ones force the use of npm commands new rule no dot after short codes use of wip is now undesirable is now deprecated use version x x x x issue type bug proposal enhancement question tested on desktop chrome chrome canary chrome dev channel firefox opera microsoft ie microsoft edge android chrome firefox opera ios chrome firefox opera | 1 |
4,824 | 24,860,280,422 | IssuesEvent | 2022-10-27 07:42:29 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | AttributeError from columns endpoint | type: bug work: backend status: ready restricted: maintainers | I started getting this error from the columns endpoint and I'm not sure how. I would only get the error when requesting the columns for one table. Other tables were fine. But after I started getting this error, I got it consistently until I restarted Docker. After restarting Docker I don't see the error at all anymore.
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/2/columns/?limit=500
Django Version: 3.1.14
Python Version: 3.9.9
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/code/mathesar/models/base.py", line 675, in __getattribute__
return super().__getattribute__(name)
During handling of the above exception ('Column' object has no attribute 'valid_target_types'), another exception occurred:
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 826, in __getattr__
return getattr(self.comparator, key)
The above exception ('Comparator' object has no attribute 'valid_target_types') was the direct cause of the following exception:
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 43, in list
return self.get_paginated_response(serializer.data)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 745, in data
ret = super().data
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 246, in data
self._data = self.to_representation(self.instance)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 663, in to_representation
return [
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 664, in <listcomp>
self.child.to_representation(item) for item in iterable
File "/code/mathesar/api/serializers/columns.py", line 84, in to_representation
representation = super().to_representation(instance)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 515, in to_representation
ret[field.field_name] = field.to_representation(attribute)
File "/usr/local/lib/python3.9/site-packages/rest_framework/fields.py", line 1882, in to_representation
return method(value)
File "/code/mathesar/api/serializers/columns.py", line 204, in get_valid_target_types
valid_target_types = column.valid_target_types
File "/code/mathesar/models/base.py", line 681, in __getattribute__
return getattr(self._sa_column, name)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 828, in __getattr__
util.raise_(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
Exception Type: AttributeError at /api/db/v0/tables/2/columns/
Exception Value: Neither 'Column' object nor 'Comparator' object has an attribute 'valid_target_types'
```
</details>
CC @mathemancer
| True | AttributeError from columns endpoint - I started getting this error from the columns endpoint and I'm not sure how. I would only get the error when requesting the columns for one table. Other tables were fine. But after I started getting this error, I got it consistently until I restarted Docker. After restarting Docker I don't see the error at all anymore.
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/db/v0/tables/2/columns/?limit=500
Django Version: 3.1.14
Python Version: 3.9.9
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Traceback (most recent call last):
File "/code/mathesar/models/base.py", line 675, in __getattribute__
return super().__getattribute__(name)
During handling of the above exception ('Column' object has no attribute 'valid_target_types'), another exception occurred:
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 826, in __getattr__
return getattr(self.comparator, key)
The above exception ('Comparator' object has no attribute 'valid_target_types') was the direct cause of the following exception:
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 55, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/mixins.py", line 43, in list
return self.get_paginated_response(serializer.data)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 745, in data
ret = super().data
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 246, in data
self._data = self.to_representation(self.instance)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 663, in to_representation
return [
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 664, in <listcomp>
self.child.to_representation(item) for item in iterable
File "/code/mathesar/api/serializers/columns.py", line 84, in to_representation
representation = super().to_representation(instance)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 515, in to_representation
ret[field.field_name] = field.to_representation(attribute)
File "/usr/local/lib/python3.9/site-packages/rest_framework/fields.py", line 1882, in to_representation
return method(value)
File "/code/mathesar/api/serializers/columns.py", line 204, in get_valid_target_types
valid_target_types = column.valid_target_types
File "/code/mathesar/models/base.py", line 681, in __getattribute__
return getattr(self._sa_column, name)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 828, in __getattr__
util.raise_(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
Exception Type: AttributeError at /api/db/v0/tables/2/columns/
Exception Value: Neither 'Column' object nor 'Comparator' object has an attribute 'valid_target_types'
```
</details>
CC @mathemancer
| main | attributeerror from columns endpoint i started getting this error from the columns endpoint and i m not sure how i would only get the error when requesting the columns for one table other tables were fine but after i started getting this error i got it consistently until i restarted docker after restarting docker i don t see the error at all anymore traceback environment request method get request url django version python version installed applications django contrib admin django contrib auth django contrib contenttypes django contrib sessions django contrib messages django contrib staticfiles rest framework django filters django property filter mathesar installed middleware django middleware security securitymiddleware django contrib sessions middleware sessionmiddleware django middleware common commonmiddleware django middleware csrf csrfviewmiddleware django contrib auth middleware authenticationmiddleware django contrib messages middleware messagemiddleware django middleware clickjacking xframeoptionsmiddleware traceback most recent call last file code mathesar models base py line in getattribute return super getattribute name during handling of the above exception column object has no attribute valid target types another exception occurred file usr local lib site packages sqlalchemy sql elements py line in getattr return getattr self comparator key the above exception comparator object has no attribute valid target types was the direct cause of the following exception file usr local lib site packages django core handlers exception py line in inner response get response request file usr local lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file usr local lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file usr local lib site packages rest framework viewsets py line in view return self dispatch request args kwargs file 
usr local lib site packages rest framework views py line in dispatch response self handle exception exc file usr local lib site packages rest framework views py line in handle exception response exception handler exc context file code mathesar exception handlers py line in mathesar exception handler raise exc file usr local lib site packages rest framework views py line in dispatch response handler request args kwargs file usr local lib site packages rest framework mixins py line in list return self get paginated response serializer data file usr local lib site packages rest framework serializers py line in data ret super data file usr local lib site packages rest framework serializers py line in data self data self to representation self instance file usr local lib site packages rest framework serializers py line in to representation return file usr local lib site packages rest framework serializers py line in self child to representation item for item in iterable file code mathesar api serializers columns py line in to representation representation super to representation instance file usr local lib site packages rest framework serializers py line in to representation ret field to representation attribute file usr local lib site packages rest framework fields py line in to representation return method value file code mathesar api serializers columns py line in get valid target types valid target types column valid target types file code mathesar models base py line in getattribute return getattr self sa column name file usr local lib site packages sqlalchemy sql elements py line in getattr util raise file usr local lib site packages sqlalchemy util compat py line in raise raise exception exception type attributeerror at api db tables columns exception value neither column object nor comparator object has an attribute valid target types cc mathemancer | 1 |
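The "Neither 'Column' object nor 'Comparator' object has an attribute" error in the traceback above is characteristic of a `__getattribute__` override that falls back to a wrapped object which itself lacks the attribute. A stripped-down sketch of the pattern, and of guarding the lookup at the call site — class and attribute handling here are illustrative, not Mathesar's actual code:

```python
class SAColumn:
    """Stand-in for a plain SQLAlchemy column with no Mathesar extras."""
    name = "id"

class Column:
    """Delegates unknown attributes to a wrapped column, as in the traceback."""
    def __init__(self, sa_column):
        self._sa_column = sa_column

    def __getattribute__(self, name):
        try:
            return super().__getattribute__(name)
        except AttributeError:
            # If the wrapped object also lacks the attribute, this re-raises,
            # producing the "Neither ... nor ..." AttributeError seen above.
            return getattr(object.__getattribute__(self, "_sa_column"), name)

col = Column(SAColumn())
print(col.name)                                  # delegated lookup succeeds
print(getattr(col, "valid_target_types", None))  # guarded access avoids the crash
```

A `getattr(..., default)` at the serializer side, or checking the wrapped object before delegating, would turn the hard failure into a recoverable missing value.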
5,732 | 30,314,203,480 | IssuesEvent | 2023-07-10 14:35:25 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | opened | Time cell not saved after pressing Tab key | type: bug work: frontend status: ready restricted: maintainers | Something is very weird here. Sometimes I'm able to reproduce this. Sometimes not. I've played with it a bunch and I've not been able to identify anything _I'm doing_ differently when I'm able to reproduce it vs not. I'm not sure if it has to do with <kbd>Tab</kbd> or not. I'm not sure it's specific to the Time UI type or other types too. Maybe it only happens when the cell is in the last column or last row. I don't know. I've not poked at it very deeply, beyond just attempting to reproduce it _consistently_, which so far I've failed to do.
## Steps to reproduce
1. Edit the value of a NULL time cell.
1. Press <kbd>Tab</kbd>.
1. Expect the value to save.
1. Instead, observe that the cell exits edit mode without saving the value. This is bad because the user might think the value is saved!
https://github.com/centerofci/mathesar/assets/42411/bbb05ae2-09c8-4faa-8d49-8bc2338f5241
| True | Time cell not saved after pressing Tab key - Something is very weird here. Sometimes I'm able to reproduce this. Sometimes not. I've played with it a bunch and I've not been able to identify anything _I'm doing_ differently when I'm able to reproduce it vs not. I'm not sure if it has to do with <kbd>Tab</kbd> or not. I'm not sure it's specific to the Time UI type or other types too. Maybe it only happens when the cell is in the last column or last row. I don't know. I've not poked at it very deeply, beyond just attempting to reproduce it _consistently_, which so far I've failed to do.
## Steps to reproduce
1. Edit the value of a NULL time cell.
1. Press <kbd>Tab</kbd>.
1. Expect the value to save.
1. Instead, observe that the cell exits edit mode without saving the value. This is bad because the user might think the value is saved!
https://github.com/centerofci/mathesar/assets/42411/bbb05ae2-09c8-4faa-8d49-8bc2338f5241
| main | time cell not saved after pressing tab key something is very weird here sometimes i m able to reproduce this sometimes not i ve played with it a bunch and i ve not been able to identify anything i m doing differently when i m able to reproduce it vs not i m not sure if it has to do with tab or not i m not sure it s specific to the time ui type or other types too maybe it only happens when the cell is in the last column or last row i don t know i ve not poked at it very deeply beyond just attempting to reproduce it consistently which so far i ve failed to do steps to reproduce edit the value of a null time cell press tab expect the value to save instead observe that the cell exits edit mode without saving the value this is bad because the user might think the value is saved | 1 |
1,646 | 6,572,672,799 | IssuesEvent | 2017-09-11 04:17:30 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Check that archives are not created within paths to be removed is unreliable | affects_2.2 bug_report waiting_on_maintainer | <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
archive
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
N/A
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
When creating an archive with remove=True the archive module checks that the archive is not created within the paths to be removed.
Since the check is a simple `dest.startswith(path)`, it can wrongly report that the archive destination lies inside a path to be removed whenever `path` is a string prefix of `dest`, even though they are different filesystem entries. E.g. with `path=/tmp/test` and `dest=/tmp/test.tar.gz`, `dest` is outside of `path`, but the check still reports that the archive would be created in a path to be removed.
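A plain `dest.startswith(path)` treats `/tmp/test2.tar.gz` as if it lived inside `/tmp/test2`. A sketch of a boundary-aware check, normalising both paths and comparing against the directory plus a trailing separator (the helper name is illustrative, not the module's code):

```python
import os

def dest_inside_path(dest, path):
    """True only when dest is the directory itself or a real child of it,
    not merely a string sharing the directory name as a prefix."""
    dest = os.path.normpath(dest)
    path = os.path.normpath(path)
    return dest == path or dest.startswith(path + os.sep)

print(dest_inside_path("/tmp/test2.tar.gz", "/tmp/test2"))    # False: sibling file
print(dest_inside_path("/tmp/test1.tar.gz", "/tmp/test1/"))   # False
print(dest_inside_path("/tmp/test2/a.tar.gz", "/tmp/test2"))  # True: real child
```

With a check like this, both tasks in the playbook below would behave the same way.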
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
The following playbook contains two tasks. The first will work because `path` (with its trailing slash) is not a string prefix of `dest`, while the second will fail because `path` is a string prefix of `dest`.
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Test archive
hosts: localhost
tasks:
- name: This will work
archive:
path: /tmp/test1/
dest: /tmp/test1.tar.gz
remove: True
- name: This will fail
archive:
path: /tmp/test2
dest: /tmp/test2.tar.gz
remove: True
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
ansible-playbook test.yml -vvvv
No config file found; using defaults
[WARNING]: Host file not found: /usr/local/etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
Loading callback plugin default of type stdout, v2.0 from /usr/local/Cellar/ansible/2.2.0.0_1/libexec/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: test.yml *************************************************************
1 plays in test.yml
PLAY [Test archive] ************************************************************
TASK [setup] *******************************************************************
Using module file /usr/local/Cellar/ansible/2.2.0.0_1/libexec/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dgonzalez
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295 `" && echo ansible-tmp-1479983438.63-62447650972295="` echo $HOME/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/_q/mrdgcdd124qf1ry_xbhzwt_x_yl1nx/T/tmp1g_Bzg TO /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295/setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295/ /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295/setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/Cellar/ansible/2.2.0.0_1/libexec/bin/python2.7 /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295/setup.py; rm -rf "/Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295/" > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [This will work] **********************************************************
task path: /Users/dgonzalez/Documents/Thesis/repo/lab/ansible/playbooks/test.yml:5
Using module file /Users/dgonzalez/Documents/Thesis/repo/lab/ansible/playbooks/library/archive.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dgonzalez
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929 `" && echo ansible-tmp-1479983439.38-146539138486929="` echo $HOME/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/_q/mrdgcdd124qf1ry_xbhzwt_x_yl1nx/T/tmpKGqjYA TO /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929/archive.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929/ /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929/archive.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/Cellar/ansible/2.2.0.0_1/libexec/bin/python2.7 /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929/archive.py; rm -rf "/Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929/" > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"archived": [],
"arcroot": "/tmp/test1/",
"changed": true,
"dest": "/tmp/test1.tar.gz",
"expanded_paths": [
"/tmp/test1/"
],
"gid": 0,
"group": "wheel",
"invocation": {
"module_args": {
"backup": null,
"content": null,
"delimiter": null,
"dest": "/tmp/test1.tar.gz",
"directory_mode": null,
"follow": false,
"force": null,
"format": "gz",
"group": null,
"mode": null,
"owner": null,
"path": "/tmp/test1.tar.gz",
"regexp": null,
"remote_src": null,
"remove": true,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"unsafe_writes": null
},
"module_name": "archive"
},
"missing": [],
"mode": "0644",
"owner": "dgonzalez",
"size": 60,
"state": "file",
"uid": 1441367741
}
TASK [This will fail] **********************************************************
task path: /Users/dgonzalez/Documents/Thesis/repo/lab/ansible/playbooks/test.yml:10
Using module file /Users/dgonzalez/Documents/Thesis/repo/lab/ansible/playbooks/library/archive.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dgonzalez
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278 `" && echo ansible-tmp-1479983440.0-114483211257278="` echo $HOME/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/_q/mrdgcdd124qf1ry_xbhzwt_x_yl1nx/T/tmpWLomP4 TO /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278/archive.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278/ /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278/archive.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/Cellar/ansible/2.2.0.0_1/libexec/bin/python2.7 /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278/archive.py; rm -rf "/Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"gid": 0,
"group": "wheel",
"invocation": {
"module_args": {
"backup": null,
"content": null,
"delimiter": null,
"dest": "/tmp/test2.tar.gz",
"directory_mode": null,
"follow": false,
"force": null,
"format": "gz",
"group": null,
"mode": null,
"owner": null,
"path": [
"/tmp/test2"
],
"regexp": null,
"remote_src": null,
"remove": true,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"unsafe_writes": null
},
"module_name": "archive"
},
"mode": "0755",
"msg": "Error, created archive can not be contained in source paths when remove=True",
"owner": "dgonzalez",
"path": "/tmp/test2",
"size": 68,
"state": "directory",
"uid": 1441367741
}
to retry, use: --limit @/Users/dgonzalez/Documents/Thesis/repo/lab/ansible/playbooks/test.retry
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1
```
| True | Check that archives are not created within paths to be removed is unreliable - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
archive
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.2.0.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
N/A
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
N/A
##### SUMMARY
<!--- Explain the problem briefly -->
When creating an archive with remove=True the archive module checks that the archive is not created within the paths to be removed.
Since the check is a simple `dest.startswith(path)`, it can wrongly report that the archive destination lies inside a path to be removed whenever `path` is a string prefix of `dest`, even though they are different filesystem entries. E.g. with `path=/tmp/test` and `dest=/tmp/test.tar.gz`, `dest` is outside of `path`, but the check still reports that the archive would be created in a path to be removed.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
The following playbook contains two tasks. The first will work because `path` (with its trailing slash) is not a string prefix of `dest`, while the second will fail because `path` is a string prefix of `dest`.
<!--- Paste example playbooks or commands between quotes below -->
```
---
- name: Test archive
hosts: localhost
tasks:
- name: This will work
archive:
path: /tmp/test1/
dest: /tmp/test1.tar.gz
remove: True
- name: This will fail
archive:
path: /tmp/test2
dest: /tmp/test2.tar.gz
remove: True
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
ansible-playbook test.yml -vvvv
No config file found; using defaults
[WARNING]: Host file not found: /usr/local/etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
Loading callback plugin default of type stdout, v2.0 from /usr/local/Cellar/ansible/2.2.0.0_1/libexec/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: test.yml *************************************************************
1 plays in test.yml
PLAY [Test archive] ************************************************************
TASK [setup] *******************************************************************
Using module file /usr/local/Cellar/ansible/2.2.0.0_1/libexec/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dgonzalez
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295 `" && echo ansible-tmp-1479983438.63-62447650972295="` echo $HOME/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/_q/mrdgcdd124qf1ry_xbhzwt_x_yl1nx/T/tmp1g_Bzg TO /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295/setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295/ /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295/setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/Cellar/ansible/2.2.0.0_1/libexec/bin/python2.7 /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295/setup.py; rm -rf "/Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983438.63-62447650972295/" > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [This will work] **********************************************************
task path: /Users/dgonzalez/Documents/Thesis/repo/lab/ansible/playbooks/test.yml:5
Using module file /Users/dgonzalez/Documents/Thesis/repo/lab/ansible/playbooks/library/archive.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dgonzalez
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929 `" && echo ansible-tmp-1479983439.38-146539138486929="` echo $HOME/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/_q/mrdgcdd124qf1ry_xbhzwt_x_yl1nx/T/tmpKGqjYA TO /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929/archive.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929/ /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929/archive.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/Cellar/ansible/2.2.0.0_1/libexec/bin/python2.7 /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929/archive.py; rm -rf "/Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983439.38-146539138486929/" > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"archived": [],
"arcroot": "/tmp/test1/",
"changed": true,
"dest": "/tmp/test1.tar.gz",
"expanded_paths": [
"/tmp/test1/"
],
"gid": 0,
"group": "wheel",
"invocation": {
"module_args": {
"backup": null,
"content": null,
"delimiter": null,
"dest": "/tmp/test1.tar.gz",
"directory_mode": null,
"follow": false,
"force": null,
"format": "gz",
"group": null,
"mode": null,
"owner": null,
"path": "/tmp/test1.tar.gz",
"regexp": null,
"remote_src": null,
"remove": true,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"unsafe_writes": null
},
"module_name": "archive"
},
"missing": [],
"mode": "0644",
"owner": "dgonzalez",
"size": 60,
"state": "file",
"uid": 1441367741
}
TASK [This will fail] **********************************************************
task path: /Users/dgonzalez/Documents/Thesis/repo/lab/ansible/playbooks/test.yml:10
Using module file /Users/dgonzalez/Documents/Thesis/repo/lab/ansible/playbooks/library/archive.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dgonzalez
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278 `" && echo ansible-tmp-1479983440.0-114483211257278="` echo $HOME/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278 `" ) && sleep 0'
<127.0.0.1> PUT /var/folders/_q/mrdgcdd124qf1ry_xbhzwt_x_yl1nx/T/tmpWLomP4 TO /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278/archive.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278/ /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278/archive.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/local/Cellar/ansible/2.2.0.0_1/libexec/bin/python2.7 /Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278/archive.py; rm -rf "/Users/dgonzalez/.ansible/tmp/ansible-tmp-1479983440.0-114483211257278/" > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"failed": true,
"gid": 0,
"group": "wheel",
"invocation": {
"module_args": {
"backup": null,
"content": null,
"delimiter": null,
"dest": "/tmp/test2.tar.gz",
"directory_mode": null,
"follow": false,
"force": null,
"format": "gz",
"group": null,
"mode": null,
"owner": null,
"path": [
"/tmp/test2"
],
"regexp": null,
"remote_src": null,
"remove": true,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"unsafe_writes": null
},
"module_name": "archive"
},
"mode": "0755",
"msg": "Error, created archive can not be contained in source paths when remove=True",
"owner": "dgonzalez",
"path": "/tmp/test2",
"size": 68,
"state": "directory",
"uid": 1441367741
}
to retry, use: --limit @/Users/dgonzalez/Documents/Thesis/repo/lab/ansible/playbooks/test.retry
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1
```
184,737 | 32,037,770,790 | IssuesEvent | 2023-09-22 16:39:46 | DeNhAiKal/INF2001_P5-5 | https://api.github.com/repos/DeNhAiKal/INF2001_P5-5 | closed | UI sketches | documentation Design | <details><summary>Task Title ID</summary>
<p>
#50
</p>
</details>
----------------------------------------------------------------------------------------------------------------------------------------------
<details><summary>Task Details</summary>
<p>
` Conducted during team meeting 4. Create a rough sketch of the flow of the website, so that everyone is aligned and clear on the flow of the website.`
</p>
</details>
----------------------------------------------------------------------------------------------------------------------------------------------
<details><summary>Goals</summary>
<p>
`Able to come up with clear UI sketches to get rid of any misunderstanding in the team.`
</p>
</details>
----------------------------------------------------------------------------------------------------------------------------------------------
<details><summary>Success - Completion Details:</summary>
<p>
`Clear and precise UI sketches in detail (in an image) to be updated on the board`
</p>
</details>
----------------------------------------------------------------------------------------------------------------------------------------------
<details><summary>Allocated time</summary>
<p>
`1 day`
</p>
</details>
----------------------------------------------------------------------------------------------------------------------------------------------
<details><summary>Start Date - End Date</summary>
<p>
`21/09/2023` - `21/09/2023`
</p>
</details>
----------------------------------------------------------------------------------------------------------------------------------------------
<details><summary>Person In-charge</summary>
<p>
- [x] Den
- [ ] Hakim
- [ ] Zac
- [ ] Brendan
- [ ] Rudhi
- [ ] Malcolm
</p>
</details>
----------------------------------------------------------------------------------------------------------------------------------------------
<details><summary>Task Status</summary>
<p>
`Completed`
</p>
</details>
----------------------------------------------------------------------------------------------------------------------------------------------
<details><summary>Current Priority</summary>
<p>
`Low priority`
</p>
</details>
5,681 | 29,833,481,463 | IssuesEvent | 2023-06-18 14:55:19 | Windham-High-School/CubeServer | https://api.github.com/repos/Windham-High-School/CubeServer | closed | Auto updates | trivial maintainability | Auto update to latest release on gh
Include systemd service install stuff in tools
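A minimal sketch of the release check such an auto-updater could start from, assuming the standard GitHub REST endpoint `GET /repos/{owner}/{repo}/releases/latest` (the helper names and the simple tag comparison are illustrative, not from the CubeServer codebase):

```python
# Build the "latest release" API URL and decide whether an update is needed
# by comparing tag names. Real code would fetch the URL, handle rate limits,
# and then download and install the release assets.
def latest_release_url(owner: str, repo: str) -> str:
    return f"https://api.github.com/repos/{owner}/{repo}/releases/latest"

def needs_update(current_tag: str, release: dict) -> bool:
    latest = release.get("tag_name")
    return latest is not None and latest != current_tag

print(latest_release_url("Windham-High-School", "CubeServer"))
print(needs_update("v1.1.0", {"tag_name": "v1.2.0"}))  # True
print(needs_update("v1.2.0", {"tag_name": "v1.2.0"}))  # False
```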
185,003 | 14,998,099,216 | IssuesEvent | 2021-01-29 17:53:24 | upb-uc4/hlf-chaincode | https://api.github.com/repos/upb-uc4/hlf-chaincode | closed | Update api definition | Hyperledger documentation enhancement | - [x] add insufficient approval error to transaction capable of returning it (e.g. addMatriculationData)
- [x] add error to be returned if a user attempts to approve transactions they are not allowed to approve
- [x] add error to be returned if a user attempts to reject transactions they are not allowed to reject
777,260 | 27,273,270,447 | IssuesEvent | 2023-02-23 01:10:33 | openmsupply/open-msupply | https://api.github.com/repos/openmsupply/open-msupply | closed | HIV Care & Treatment Enrolment Form - Multiple | programs Priority: High | **Path:** Dispensary > Patients > Add Program: `HIV Care & Treatment`
- [x] `Enrolment date`: unable to edit the enrolment date (default: today's date). User should be able to select a date in the past.

- [x] `Enrolment Patient Id` to be renamed `Program ID`
### **Mother:**
- [x] `Clinic`: field to be removed from the form
#### `Mother's Patient ID`:
- [ ] `HIV Program ID of the patient's mother`. Dependent field: it should only be visible and entered if the patient is an infant (less than 2 years old). Since there is only one field in that section, I'd move it to the top section (see below):

`HIV Program ID of the patient's mother` - 2 options:
- user manually enters the mother's program ID
- user can select a Program ID in a Drop Down List that contains all Program ID of patients registered in this clinic (as a way to connect 2 patients records together). The infant's mother is always a registered patient with a HIV program ID.
**Referral Details:**
- [x] `Referral Details` section to be renamed to `Referral or Transfer in Details`
- [x] keep `Referred From:` as it is
- [x] keep `Priori ART:` as it is
- [x] Add `Clinic Transferred From:` (free text field)
- [x] Add `Previous Clinic ART Start Date:` (date picker, can be a date in the past)
- [x] Add `Date Transferred In` (date picker, can be a date in the past)
- [x] Add `Previous Clinic ID` (free text field)
For reference:
<img width="348" alt="Screen Shot 2023-02-08 at 5 24 22 PM" src="https://user-images.githubusercontent.com/74992958/217433165-ebc7470f-12d1-4485-8a3a-61ba48ed92aa.png">
2,894 | 10,319,652,371 | IssuesEvent | 2019-08-30 18:08:02 | backdrop-ops/contrib | https://api.github.com/repos/backdrop-ops/contrib | closed | Joining Request | Maintainer application | Hi Team,
I am a Drupal developer and have developed Drupal contrib modules. I would like to join the Backdrop community. My Drupal profile: https://www.drupal.org/u/iyyappangovind-0
268,864 | 28,932,230,627 | IssuesEvent | 2023-05-09 01:04:32 | adamcarr1976/MailHog-UI | https://api.github.com/repos/adamcarr1976/MailHog-UI | opened | CVE-2023-26116 (High) detected in angular-1.3.8.js | Mend: dependency security vulnerability | ## CVE-2023-26116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>angular-1.3.8.js</b></summary>
<p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.3.8/angular.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.3.8/angular.js</a></p>
<p>Path to vulnerable library: /assets/js/angular-1.3.8.js</p>
<p>
Dependency Hierarchy:
- :x: **angular-1.3.8.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/adamcarr1976/MailHog-UI/commit/07c8ab471186779895ff8346d63fb9a13196d182">07c8ab471186779895ff8346d63fb9a13196d182</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of the package angular are vulnerable to Regular Expression Denial of Service (ReDoS) via the angular.copy() utility function due to the usage of an insecure regular expression. Exploiting this vulnerability is possible by a large carefully-crafted input, which can result in catastrophic backtracking.
<p>Publish Date: 2023-03-30
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-26116>CVE-2023-26116</a></p>
</p>
</details>
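The catastrophic backtracking described above can be made concrete with a generic "evil regex" demo. Note this is NOT the actual expression inside `angular.copy()` — just the textbook nested-quantifier shape that produces the same failure mode:

```python
import re

EVIL = re.compile(r"^(a+)+$")  # nested quantifier: the classic ReDoS shape

# For a non-matching input, the backtracking engine tries roughly 2^n ways
# to split the run of 'a's between the inner and outer quantifier before
# failing, so matching time doubles with every extra character.
print(bool(EVIL.match("aaaa")))          # True  (matching input: fast)
print(bool(EVIL.match("a" * 15 + "b")))  # False (short non-match: still fast)
# EVIL.match("a" * 40 + "b")             # would effectively hang: ReDoS
```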
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
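For reference, the 7.5 above follows mechanically from the listed metrics via the CVSS v3.1 base-score equations (weights from the FIRST specification; this sketch uses a simplified round-up rather than the spec's floating-point-safe variant):

```python
import math

# Metric weights from the CVSS v3.1 specification for the vector
# AV:N / AC:L / PR:N / UI:N / S:U / C:N / I:N / A:H listed above.
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85
C, I, A = 0.0, 0.0, 0.56

iss = 1 - (1 - C) * (1 - I) * (1 - A)      # Impact Sub-Score = 0.56
impact = 6.42 * iss                        # scope unchanged
exploitability = 8.22 * AV * AC * PR * UI

def roundup(x: float) -> float:
    """CVSS 'round up to one decimal place' (simplified)."""
    return math.ceil(x * 10) / 10

base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
print(base)  # 7.5
```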
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
834 | 4,473,414,498 | IssuesEvent | 2016-08-26 03:50:08 | ansible/ansible-modules-core | https://api.github.com/repos/ansible/ansible-modules-core | closed | problem with hash character (#) in path in fetch module | bug_report waiting_on_maintainer | Issue Type: Bug Report
Ansible Version: ansible 1.9.3 (ansible-1.9.3-2.fc21)
Ansible Configuration: no changes to /etc/ansible/ansible.cfg made
Environment: Fedora 21, x86_64
Summary: When I use a hash character (#) in a path in the fetch module, the file is not fetched and a checksum mismatch msg is returned instead
Steps To Reproduce:
1) create ~/ansible/ansible-error.yml playbook
---
- hosts: gluster-tst
  remote_user: root
  tasks:
    - name: fetch module tester
      fetch: src=/tmp/remote_file.txt dest=~/tmp/#test/ flat=yes
2) populate /tmp/remote_file.txt on remote host(s)
3) run playbook: ansible-playbook ~/ansible/ansible-error.yml
PLAY [gluster-tst] ************************************************************
GATHERING FACTS ***************************************************************
ok: [gluster-tst01]
TASK: [fetch module tester] ***************************************************
failed: [gluster-tst01] => {"checksum": null, "dest": "/home/dron/tmp/#test/remote_file.txt", "failed": true, "file": "/tmp/remote_file.txt", "md5sum": null, "remote_checksum": "4fe0b800d221d1a61c44cd81d2975a288ffd22e4", "remote_md5sum": null}
msg: checksum mismatch
Expected Results: fetched file
Actual Results: file is not fetched at all (does not exist locally) and msg: checksum mismatch is returned
218,724 | 7,332,276,206 | IssuesEvent | 2018-03-05 15:54:11 | dfci/cidc-devops | https://api.github.com/repos/dfci/cidc-devops | closed | build-services.sh is labeled differently than the other repos | lowpriority | Probably want to have a consistent build file name for all repos. | 1.0 | non_main | 0
66,437 | 20,198,302,555 | IssuesEvent | 2022-02-11 12:49:33 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | closed | Dynamic contextmenu items cause null exception in ContextMenu.positionSubmenu | defect | **I'm submitting a ...** (check one with "x")
```
[x ] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
Please demonstrate your case at stackblitz by using the issue template below. Issues without a test case are much less likely to be reviewed in detail and assisted.
Here is a stackblitz with version 13, where it fails:
https://stackblitz.com/edit/angular-ivy-fa7of2?file=src/app/app.component.html
Here is a stackblitz with version 10, where it works (please ignore the URL naming):
https://stackblitz.com/edit/angular-10-directive-elementref-example-ws5lg4?file=src/app/app.component.ts
**Current behavior**
<!-- Describe how the bug manifests. -->
Dynamically add items to the bound MenuItem[] array, then call ContextMenu.show(). The first time, you are able to select the dynamically added menu item. The dynamically added item is removed in the onHide() event. The second time you right-click, the dynamic item is added again but a null exception occurs. This was working fine with PrimeNG 9 and 10; the bug was introduced when updating to 11 and still exists in 13.
**Expected behavior**
<!-- Describe what the behavior would be without the bug. -->
Not throw null reference exception.
**Minimal reproduction of the problem with instructions**
<!--
If the current behavior is a bug or you can illustrate your feature request better with an example,
please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via
https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5).
-->
See above.
**Please tell us about your environment:**
<!-- Operating system, IDE, package manager, HTTP server, ... -->
Windows 10
* **Angular version:** 5.X
<!-- Check whether this is still an issue in the most recent Angular version -->
Bug first appears with Angular 11, but present in 12 and 13.
* **PrimeNG version:** 5.X
<!-- Check whether this is still an issue in the most recent Angular version -->
Bug first appeared with Primeng 11, present in 12 and 13.
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
<!-- All browsers where this could be reproduced -->
Only tested with Chrome Version 98.0.4758.82 on Windows
* **Language:** [all | TypeScript X.X | ES6/7 | ES5]
* **Node (for AoT issues):** `node --version` =
With or without IVY.
| 1.0 | non_main | 0
2,608 | 8,849,759,583 | IssuesEvent | 2019-01-08 11:10:22 | dzavalishin/mqtt_udp | https://api.github.com/repos/dzavalishin/mqtt_udp | opened | Python code: logging and verbose mode | Maintain enhancement | ```
verbose = cfg.getboolean('verbose')
import logging as log
log.basicConfig(filename="openhab_gate.log", level=log.INFO)
log.info("text")
``` | True | main | 1
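A small sketch (my own illustration, not code from the mqtt_udp repo) of wiring that boolean `verbose` config value into the logging level instead of hard-coding INFO:

```python
import logging

def make_logger(verbose, name="openhab_gate"):
    """Map a boolean 'verbose' config flag to a logging level (sketch)."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG if verbose else logging.INFO)
    if not logger.handlers:  # avoid stacking handlers on repeat calls
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
        logger.addHandler(handler)
    return logger

log = make_logger(verbose=True)
log.debug("only emitted in verbose mode")
```

The function name and handler setup are assumptions for illustration; the real gateway could equally pass `level=` to `logging.basicConfig` based on the flag.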
76,454 | 9,941,818,205 | IssuesEvent | 2019-07-03 12:34:14 | rflamary/POT | https://api.github.com/repos/rflamary/POT | closed | Cannot run gpu modules | documentation | Hello,
I am trying out the GPU implementation of the sinkhorn transport, but with not much success.
```
>>> a=[.5,.5]
>>> b=[.5,.5]
>>> M=[[0.,1.],[1.,0.]]
>>> ot.gpu.sinkhorn(a,b,M,1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'ot' has no attribute 'gpu'
```
However, the ot.sinkhorn(a,b,M,1) works as expected.
I have cupy installed as well as the CUDA SDK.
Could someone help? | 1.0 | non_main | 0
1,090 | 4,952,809,515 | IssuesEvent | 2016-12-01 13:17:08 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | Try to set selinux with sefcontext returns UnicodeEncodeError | affects_2.2 bug_report waiting_on_maintainer |
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
sefcontext
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
Ansible execution:
- Fedora 25
- Python 2.7.12
Managed Host:
- Fedora 24
- Python 2.7.12
##### SUMMARY
Trying to set an SELinux fcontext fails with the error: UnicodeEncodeError: 'ascii' codec can't encode character u'\\xe9' in position 8: ordinal not in range(128)
##### STEPS TO REPRODUCE
Execute the following playbook on the host.
```
- name: Test selinux
  sefcontext: target='/srv/git_repos(/.*)?' setype=httpd_git_rw_content_t state=present
```
##### EXPECTED RESULTS
SELinux context checked/changed
##### ACTUAL RESULTS
```
Using /etc/ansible/ansible.cfg as config file
statically included: /home/sok/Documents/Informatique/Plateformes/serveurPerso/genericHandlers/reboot.yml
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: peroServers.yml ******************************************************
1 plays in peroServers.yml
PLAY [fullservers] *************************************************************
TASK [setup] *******************************************************************
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/system/setup.py
<web1.infra> ESTABLISH SSH CONNECTION FOR USER: ligodith
<web1.infra> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<web1.infra> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=4242)
<web1.infra> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<web1.infra> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ligodith)
<web1.infra> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<web1.infra> SSH: PlayContext set ssh_common_args: ()
<web1.infra> SSH: PlayContext set ssh_extra_args: ()
<web1.infra> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C)
<web1.infra> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=4242 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ligodith -o ConnectTimeout=10 -o ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C web1.infra '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1480287343.32-243896499060526 `" && echo ansible-tmp-1480287343.32-243896499060526="` echo $HOME/.ansible/tmp/ansible-tmp-1480287343.32-243896499060526 `" ) && sleep 0'"'"''
<web1.infra> PUT /tmp/tmpICmQcI TO /home/ligodith/.ansible/tmp/ansible-tmp-1480287343.32-243896499060526/setup.py
<web1.infra> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<web1.infra> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=4242)
<web1.infra> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<web1.infra> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ligodith)
<web1.infra> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<web1.infra> SSH: PlayContext set ssh_common_args: ()
<web1.infra> SSH: PlayContext set sftp_extra_args: ()
<web1.infra> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C)
<web1.infra> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=4242 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ligodith -o ConnectTimeout=10 -o ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C '[web1.infra]'
<web1.infra> ESTABLISH SSH CONNECTION FOR USER: ligodith
<web1.infra> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<web1.infra> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=4242)
<web1.infra> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<web1.infra> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ligodith)
<web1.infra> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<web1.infra> SSH: PlayContext set ssh_common_args: ()
<web1.infra> SSH: PlayContext set ssh_extra_args: ()
<web1.infra> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C)
<web1.infra> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=4242 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ligodith -o ConnectTimeout=10 -o ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C web1.infra '/bin/sh -c '"'"'chmod u+x /home/ligodith/.ansible/tmp/ansible-tmp-1480287343.32-243896499060526/ /home/ligodith/.ansible/tmp/ansible-tmp-1480287343.32-243896499060526/setup.py && sleep 0'"'"''
<web1.infra> ESTABLISH SSH CONNECTION FOR USER: ligodith
<web1.infra> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<web1.infra> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=4242)
<web1.infra> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<web1.infra> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ligodith)
<web1.infra> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<web1.infra> SSH: PlayContext set ssh_common_args: ()
<web1.infra> SSH: PlayContext set ssh_extra_args: ()
<web1.infra> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C)
<web1.infra> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=4242 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ligodith -o ConnectTimeout=10 -o ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C -tt web1.infra '/bin/sh -c '"'"'/usr/bin/python /home/ligodith/.ansible/tmp/ansible-tmp-1480287343.32-243896499060526/setup.py; rm -rf "/home/ligodith/.ansible/tmp/ansible-tmp-1480287343.32-243896499060526/" > /dev/null 2>&1 && sleep 0'"'"''
ok: [web1.infra]
TASK [webServer : Test selinux] ************************************************
task path: /home/sok/Documents/Informatique/Plateformes/serveurPerso/roles/webServer/tasks/main.yml:44
Using module file /usr/lib/python2.7/site-packages/ansible/modules/extras/system/sefcontext.py
<web1.infra> ESTABLISH SSH CONNECTION FOR USER: ligodith
<web1.infra> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<web1.infra> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=4242)
<web1.infra> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<web1.infra> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ligodith)
<web1.infra> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<web1.infra> SSH: PlayContext set ssh_common_args: ()
<web1.infra> SSH: PlayContext set ssh_extra_args: ()
<web1.infra> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C)
<web1.infra> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=4242 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ligodith -o ConnectTimeout=10 -o ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C web1.infra '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1480287344.14-116222128683348 `" && echo ansible-tmp-1480287344.14-116222128683348="` echo $HOME/.ansible/tmp/ansible-tmp-1480287344.14-116222128683348 `" ) && sleep 0'"'"''
<web1.infra> PUT /tmp/tmpebPXbF TO /home/ligodith/.ansible/tmp/ansible-tmp-1480287344.14-116222128683348/sefcontext.py
<web1.infra> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<web1.infra> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=4242)
<web1.infra> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<web1.infra> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ligodith)
<web1.infra> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<web1.infra> SSH: PlayContext set ssh_common_args: ()
<web1.infra> SSH: PlayContext set sftp_extra_args: ()
<web1.infra> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C)
<web1.infra> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=4242 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ligodith -o ConnectTimeout=10 -o ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C '[web1.infra]'
<web1.infra> ESTABLISH SSH CONNECTION FOR USER: ligodith
<web1.infra> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<web1.infra> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=4242)
<web1.infra> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<web1.infra> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ligodith)
<web1.infra> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<web1.infra> SSH: PlayContext set ssh_common_args: ()
<web1.infra> SSH: PlayContext set ssh_extra_args: ()
<web1.infra> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C)
<web1.infra> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=4242 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ligodith -o ConnectTimeout=10 -o ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C web1.infra '/bin/sh -c '"'"'chmod u+x /home/ligodith/.ansible/tmp/ansible-tmp-1480287344.14-116222128683348/ /home/ligodith/.ansible/tmp/ansible-tmp-1480287344.14-116222128683348/sefcontext.py && sleep 0'"'"''
<web1.infra> ESTABLISH SSH CONNECTION FOR USER: ligodith
<web1.infra> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<web1.infra> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=4242)
<web1.infra> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<web1.infra> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ligodith)
<web1.infra> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<web1.infra> SSH: PlayContext set ssh_common_args: ()
<web1.infra> SSH: PlayContext set ssh_extra_args: ()
<web1.infra> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C)
<web1.infra> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=4242 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ligodith -o ConnectTimeout=10 -o ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C -tt web1.infra '/bin/sh -c '"'"'/usr/bin/python /home/ligodith/.ansible/tmp/ansible-tmp-1480287344.14-116222128683348/sefcontext.py; rm -rf "/home/ligodith/.ansible/tmp/ansible-tmp-1480287344.14-116222128683348/" > /dev/null 2>&1 && sleep 0'"'"''
fatal: [web1.infra]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "sefcontext"
},
"module_stderr": "OpenSSH_7.3p1, OpenSSL 1.0.2j-fips 26 Sep 2016\r\ndebug1: Reading configuration data /home/sok/.ssh/config\r\ndebug1: /home/sok/.ssh/config line 48: Applying options for *\r\ndebug1: /home/sok/.ssh/config line 64: Applying options for gre*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug3: /etc/ssh/ssh_config line 56: Including file /etc/ssh/ssh_config.d/05-redhat.conf depth 0\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug1: /etc/ssh/ssh_config.d/05-redhat.conf line 2: include /etc/crypto-policies/back-ends/openssh.txt matched no files\r\ndebug1: /etc/ssh/ssh_config.d/05-redhat.conf line 8: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 15831\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\nX11 forwarding request failed\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to web1.infra closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_5hHAPf/ansible_module_sefcontext.py\", line 246, in <module>\r\n main()\r\n File \"/tmp/ansible_5hHAPf/ansible_module_sefcontext.py\", line 238, in main\r\n semanage_fcontext_modify(module, result, target, ftype, setype, do_reload, serange, seuser)\r\n File \"/tmp/ansible_5hHAPf/ansible_module_sefcontext.py\", line 163, in semanage_fcontext_modify\r\n module.fail_json(msg=\"%s: %s\\n\" % (e.__class__.__name__, str(e)))\r\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\xe9' in position 8: ordinal not in range(128)\r\n",
"msg": "MODULE FAILURE"
}
to retry, use: --limit @/home/sok/Documents/Informatique/Plateformes/serveurPerso/peroServers.retry
PLAY RECAP *********************************************************************
web1.infra : ok=1 changed=0 unreachable=0 failed=1
```
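The traceback ends in a second UnicodeEncodeError raised while formatting the first exception: on Python 2, `str(e)` implicitly ASCII-encodes a unicode error message, which fails on accented characters (here u'\xe9', likely from a French locale). A minimal sketch of the failure mode and a safe alternative (my illustration, not the module's actual code):

```python
# Hypothetical localized error message containing U+00E9 ('e' acute):
msg = u"op\xe9ration invalide"

# Forcing an ASCII byte encoding reproduces the failure from the traceback
# (on Python 2 this is what str(unicode_obj) does implicitly):
try:
    msg.encode("ascii")
except UnicodeEncodeError as exc:
    print(type(exc).__name__)   # UnicodeEncodeError

# Formatting with a unicode template keeps the message intact:
safe = u"%s: %s" % (u"ValueError", msg)
print(safe)
```

The sample message is an assumption; the point is that the module's error-reporting path should avoid byte-encoding the exception text.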
| True | Try to set selinux with sefcontext returns UnicodeEncodeError
<web1.infra> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<web1.infra> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ligodith)
<web1.infra> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<web1.infra> SSH: PlayContext set ssh_common_args: ()
<web1.infra> SSH: PlayContext set sftp_extra_args: ()
<web1.infra> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C)
<web1.infra> SSH: EXEC sftp -b - -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=4242 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ligodith -o ConnectTimeout=10 -o ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C '[web1.infra]'
<web1.infra> ESTABLISH SSH CONNECTION FOR USER: ligodith
<web1.infra> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<web1.infra> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=4242)
<web1.infra> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<web1.infra> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ligodith)
<web1.infra> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<web1.infra> SSH: PlayContext set ssh_common_args: ()
<web1.infra> SSH: PlayContext set ssh_extra_args: ()
<web1.infra> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C)
<web1.infra> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=4242 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ligodith -o ConnectTimeout=10 -o ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C web1.infra '/bin/sh -c '"'"'chmod u+x /home/ligodith/.ansible/tmp/ansible-tmp-1480287344.14-116222128683348/ /home/ligodith/.ansible/tmp/ansible-tmp-1480287344.14-116222128683348/sefcontext.py && sleep 0'"'"''
<web1.infra> ESTABLISH SSH CONNECTION FOR USER: ligodith
<web1.infra> SSH: ansible.cfg set ssh_args: (-C)(-o)(ControlMaster=auto)(-o)(ControlPersist=60s)
<web1.infra> SSH: ANSIBLE_REMOTE_PORT/remote_port/ansible_port set: (-o)(Port=4242)
<web1.infra> SSH: ansible_password/ansible_ssh_pass not set: (-o)(KbdInteractiveAuthentication=no)(-o)(PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey)(-o)(PasswordAuthentication=no)
<web1.infra> SSH: ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set: (-o)(User=ligodith)
<web1.infra> SSH: ANSIBLE_TIMEOUT/timeout set: (-o)(ConnectTimeout=10)
<web1.infra> SSH: PlayContext set ssh_common_args: ()
<web1.infra> SSH: PlayContext set ssh_extra_args: ()
<web1.infra> SSH: found only ControlPersist; added ControlPath: (-o)(ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C)
<web1.infra> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o Port=4242 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ligodith -o ConnectTimeout=10 -o ControlPath=/home/sok/.ansible/cp/ansible-ssh-%C -tt web1.infra '/bin/sh -c '"'"'/usr/bin/python /home/ligodith/.ansible/tmp/ansible-tmp-1480287344.14-116222128683348/sefcontext.py; rm -rf "/home/ligodith/.ansible/tmp/ansible-tmp-1480287344.14-116222128683348/" > /dev/null 2>&1 && sleep 0'"'"''
fatal: [web1.infra]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_name": "sefcontext"
},
"module_stderr": "OpenSSH_7.3p1, OpenSSL 1.0.2j-fips 26 Sep 2016\r\ndebug1: Reading configuration data /home/sok/.ssh/config\r\ndebug1: /home/sok/.ssh/config line 48: Applying options for *\r\ndebug1: /home/sok/.ssh/config line 64: Applying options for gre*\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug3: /etc/ssh/ssh_config line 56: Including file /etc/ssh/ssh_config.d/05-redhat.conf depth 0\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug1: /etc/ssh/ssh_config.d/05-redhat.conf line 2: include /etc/crypto-policies/back-ends/openssh.txt matched no files\r\ndebug1: /etc/ssh/ssh_config.d/05-redhat.conf line 8: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 15831\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\nX11 forwarding request failed\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\nShared connection to web1.infra closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_5hHAPf/ansible_module_sefcontext.py\", line 246, in <module>\r\n main()\r\n File \"/tmp/ansible_5hHAPf/ansible_module_sefcontext.py\", line 238, in main\r\n semanage_fcontext_modify(module, result, target, ftype, setype, do_reload, serange, seuser)\r\n File \"/tmp/ansible_5hHAPf/ansible_module_sefcontext.py\", line 163, in semanage_fcontext_modify\r\n module.fail_json(msg=\"%s: %s\\n\" % (e.__class__.__name__, str(e)))\r\nUnicodeEncodeError: 'ascii' codec can't encode character u'\\xe9' in position 8: ordinal not in range(128)\r\n",
"msg": "MODULE FAILURE"
}
to retry, use: --limit @/home/sok/Documents/Informatique/Plateformes/serveurPerso/peroServers.retry
PLAY RECAP *********************************************************************
web1.infra : ok=1 changed=0 unreachable=0 failed=1
```
| main | try to set selinux with sefcontext returns unicodeencodeerror issue type bug report component name sefcontext ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment ansible execution fedora python managed host fedora python summary trying to set selinux fcontext fails with error unicodeencodeerror ascii codec can t encode character u in position ordinal not in range steps to reproduce execute following plabook on host name test selinux sefcontext target srv git repos setype httpd git rw content t state present expected results selinux context checked changes actual results using etc ansible ansible cfg as config file statically included home sok documents informatique plateformes serveurperso generichandlers reboot yml loading callback plugin default of type stdout from usr lib site packages ansible plugins callback init pyc playbook peroservers yml plays in peroservers yml play task using module file usr lib site packages ansible modules core system setup py establish ssh connection for user ligodith ssh ansible cfg set ssh args c o controlmaster auto o controlpersist ssh ansible remote port remote port ansible port set o port ssh ansible password ansible ssh pass not set o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no ssh ansible remote user remote user ansible user user u set o user ligodith ssh ansible timeout timeout set o connecttimeout ssh playcontext set ssh common args ssh playcontext set ssh extra args ssh found only controlpersist added controlpath o controlpath home sok ansible cp ansible ssh c ssh exec ssh vvv c o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased 
publickey o passwordauthentication no o user ligodith o connecttimeout o controlpath home sok ansible cp ansible ssh c infra bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpicmqci to home ligodith ansible tmp ansible tmp setup py ssh ansible cfg set ssh args c o controlmaster auto o controlpersist ssh ansible remote port remote port ansible port set o port ssh ansible password ansible ssh pass not set o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no ssh ansible remote user remote user ansible user user u set o user ligodith ssh ansible timeout timeout set o connecttimeout ssh playcontext set ssh common args ssh playcontext set sftp extra args ssh found only controlpersist added controlpath o controlpath home sok ansible cp ansible ssh c ssh exec sftp b vvv c o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ligodith o connecttimeout o controlpath home sok ansible cp ansible ssh c establish ssh connection for user ligodith ssh ansible cfg set ssh args c o controlmaster auto o controlpersist ssh ansible remote port remote port ansible port set o port ssh ansible password ansible ssh pass not set o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no ssh ansible remote user remote user ansible user user u set o user ligodith ssh ansible timeout timeout set o connecttimeout ssh playcontext set ssh common args ssh playcontext set ssh extra args ssh found only controlpersist added controlpath o controlpath home sok ansible cp ansible ssh c ssh exec ssh vvv c o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi 
keyex hostbased publickey o passwordauthentication no o user ligodith o connecttimeout o controlpath home sok ansible cp ansible ssh c infra bin sh c chmod u x home ligodith ansible tmp ansible tmp home ligodith ansible tmp ansible tmp setup py sleep establish ssh connection for user ligodith ssh ansible cfg set ssh args c o controlmaster auto o controlpersist ssh ansible remote port remote port ansible port set o port ssh ansible password ansible ssh pass not set o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no ssh ansible remote user remote user ansible user user u set o user ligodith ssh ansible timeout timeout set o connecttimeout ssh playcontext set ssh common args ssh playcontext set ssh extra args ssh found only controlpersist added controlpath o controlpath home sok ansible cp ansible ssh c ssh exec ssh vvv c o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ligodith o connecttimeout o controlpath home sok ansible cp ansible ssh c tt infra bin sh c usr bin python home ligodith ansible tmp ansible tmp setup py rm rf home ligodith ansible tmp ansible tmp dev null sleep ok task task path home sok documents informatique plateformes serveurperso roles webserver tasks main yml using module file usr lib site packages ansible modules extras system sefcontext py establish ssh connection for user ligodith ssh ansible cfg set ssh args c o controlmaster auto o controlpersist ssh ansible remote port remote port ansible port set o port ssh ansible password ansible ssh pass not set o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no ssh ansible remote user remote user ansible user user u set o user ligodith ssh ansible timeout timeout set o connecttimeout ssh 
playcontext set ssh common args ssh playcontext set ssh extra args ssh found only controlpersist added controlpath o controlpath home sok ansible cp ansible ssh c ssh exec ssh vvv c o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ligodith o connecttimeout o controlpath home sok ansible cp ansible ssh c infra bin sh c umask mkdir p echo home ansible tmp ansible tmp echo ansible tmp echo home ansible tmp ansible tmp sleep put tmp tmpebpxbf to home ligodith ansible tmp ansible tmp sefcontext py ssh ansible cfg set ssh args c o controlmaster auto o controlpersist ssh ansible remote port remote port ansible port set o port ssh ansible password ansible ssh pass not set o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no ssh ansible remote user remote user ansible user user u set o user ligodith ssh ansible timeout timeout set o connecttimeout ssh playcontext set ssh common args ssh playcontext set sftp extra args ssh found only controlpersist added controlpath o controlpath home sok ansible cp ansible ssh c ssh exec sftp b vvv c o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ligodith o connecttimeout o controlpath home sok ansible cp ansible ssh c establish ssh connection for user ligodith ssh ansible cfg set ssh args c o controlmaster auto o controlpersist ssh ansible remote port remote port ansible port set o port ssh ansible password ansible ssh pass not set o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no ssh ansible remote user remote user ansible user user u set o user ligodith ssh ansible timeout timeout set o 
connecttimeout ssh playcontext set ssh common args ssh playcontext set ssh extra args ssh found only controlpersist added controlpath o controlpath home sok ansible cp ansible ssh c ssh exec ssh vvv c o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ligodith o connecttimeout o controlpath home sok ansible cp ansible ssh c infra bin sh c chmod u x home ligodith ansible tmp ansible tmp home ligodith ansible tmp ansible tmp sefcontext py sleep establish ssh connection for user ligodith ssh ansible cfg set ssh args c o controlmaster auto o controlpersist ssh ansible remote port remote port ansible port set o port ssh ansible password ansible ssh pass not set o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no ssh ansible remote user remote user ansible user user u set o user ligodith ssh ansible timeout timeout set o connecttimeout ssh playcontext set ssh common args ssh playcontext set ssh extra args ssh found only controlpersist added controlpath o controlpath home sok ansible cp ansible ssh c ssh exec ssh vvv c o controlmaster auto o controlpersist o port o kbdinteractiveauthentication no o preferredauthentications gssapi with mic gssapi keyex hostbased publickey o passwordauthentication no o user ligodith o connecttimeout o controlpath home sok ansible cp ansible ssh c tt infra bin sh c usr bin python home ligodith ansible tmp ansible tmp sefcontext py rm rf home ligodith ansible tmp ansible tmp dev null sleep fatal failed changed false failed true invocation module name sefcontext module stderr openssh openssl fips sep r reading configuration data home sok ssh config r home sok ssh config line applying options for r home sok ssh config line applying options for gre r reading configuration data etc ssh ssh config r etc ssh ssh config line 
including file etc ssh ssh config d redhat conf depth r reading configuration data etc ssh ssh config d redhat conf r etc ssh ssh config d redhat conf line include etc crypto policies back ends openssh txt matched no files r etc ssh ssh config d redhat conf line applying options for r auto mux trying existing master r fd setting o nonblock r mux client hello exchange master version r mux client forwards request forwardings local remote r mux client request session entering r mux client request alive entering r mux client request alive done pid r mux client request session session request sent r mux client request session master session id r forwarding request failed r mux client read packet read header failed broken pipe r received exit status from master r nshared connection to infra closed r n module stdout traceback most recent call last r n file tmp ansible ansible module sefcontext py line in r n main r n file tmp ansible ansible module sefcontext py line in main r n semanage fcontext modify module result target ftype setype do reload serange seuser r n file tmp ansible ansible module sefcontext py line in semanage fcontext modify r n module fail json msg s s n e class name str e r nunicodeencodeerror ascii codec can t encode character u in position ordinal not in range r n msg module failure to retry use limit home sok documents informatique plateformes serveurperso peroservers retry play recap infra ok changed unreachable failed | 1 |
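The traceback in the row above shows Python 2's implicit ASCII codec choking on a non-ASCII character (U+00E9, 'é') while `sefcontext.py` formats its error message with `"%s" % str(e)`. A minimal Python 3 sketch of the same codec failure and the usual remedy is below; the helper names and the sample path `/srv/répos` are illustrative, not taken from the module itself.

```python
# Minimal reproduction of the codec failure seen in the traceback above.
# Python 2's "%s" formatting implicitly encoded unicode through the ascii
# codec; any non-ASCII character (here U+00E9, 'e' with acute) raises
# UnicodeEncodeError, which is what surfaced as MODULE FAILURE.

def format_error_ascii(msg):
    # Mimics the old behaviour: force the message through the ascii codec.
    return msg.encode("ascii")

def format_error_utf8(msg):
    # The usual fix: encode explicitly as UTF-8 (or keep the text as unicode
    # end to end and let the JSON layer handle serialization).
    return msg.encode("utf-8")

message = "ValueError: bad target /srv/répos\n"

try:
    format_error_ascii(message)
except UnicodeEncodeError as e:
    # e.reason is "ordinal not in range(128)", matching the log above.
    print("reproduced:", e.reason)

print(format_error_utf8(message))
```

The fix that landed for this class of bug in Ansible modules was to stop round-tripping exception text through the ASCII codec and emit UTF-8 (or native unicode) instead.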
112,388 | 14,244,128,066 | IssuesEvent | 2020-11-19 06:15:34 | urbit/landscape | https://api.github.com/repos/urbit/landscape | closed | landscape: setting root profile should set profile everywhere | design | "Contacts per group" doesn't really make sense, and should be architected to be more sane, allowing you to store one profile per ship you interact with. | 1.0 | landscape: setting root profile should set profile everywhere - "Contacts per group" doesn't really make sense, and should be architected to be more sane, allowing you to store one profile per ship you interact with. | non_main | landscape setting root profile should set profile everywhere contacts per group doesn t really make sense and should be architected to be more sane allowing you to store one profile per ship you interact with | 0 |